Prime Minister Theresa May has been warned that her promise to tighten regulation on tech firms after the London attacks will not work.
Mrs May said areas of the internet must be closed because tech giants provided a “safe space” for terrorist ideology.
But the Open Rights Group said social media firms were not the problem, while an expert in radicalisation branded her criticism “intellectually lazy”.
Twitter, Facebook and Google said they were working hard to fight extremism.
Google (which owns YouTube), Facebook (which owns WhatsApp) and Twitter were among the tech companies already facing pressure to tackle extremist content – pressure that intensified on Sunday.
Mrs May said: “We cannot allow this ideology the safe space it needs to breed.
“Yet that is precisely what the internet, and the big companies… provide.”
On ITV’s Peston on Sunday, Home Secretary Amber Rudd said an international agreement was needed for social media companies to do more to stop radicalisation.
“One (requirement) is to make sure they do more to take down the material that is radicalising people,” Mrs Rudd said.
“And secondly, to help work with us to limit the amount of end-to-end encryption that otherwise terrorists can use,” she said.
But the Open Rights Group, which campaigns for privacy and free speech online, warned that more regulation risked pushing terrorists’ “vile networks” into the “darker corners of the web”.
“The internet and companies like Facebook are not the cause of hate and violence, but tools that can be abused.
“While governments and companies should take sensible measures to stop abuse, attempts to control the internet is not the simple solution that Theresa May is claiming,” Open Rights said.
Professor Peter Neumann, director of the International Centre For The Study Of Radicalisation at King’s College London, was also critical of Mrs May.
He wrote on Twitter: “Big social media platforms have cracked down on jihadist accounts, with result that most jihadists are now using end-to-end encrypted messenger platforms e.g. Telegram.
“This has not solved problem, just made it different… moreover, few people (are) radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy.”
However, Dr Julia Rushchenko, a London-based research fellow at the Henry Jackson Centre for Radicalisation and Terrorism, told the BBC that Mrs May was right, and that more could be done by tech giants to root out such content.
She felt that the companies erred on the side of privacy, not security. “We all know that social media companies have been a very helpful tool for hate preachers and for extremists,” Dr Rushchenko said.
The online world had been a recruiting aid for foreign fighters, she said, and social media needed “stricter monitoring”, both by government agencies and by third-party groups created to flag up extremist content.
‘No place on our platform’
However, the major social media firms said on Sunday that they were working hard to rid their networks of terrorist activity and support.
Facebook said: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it – and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”
Google said it was “committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online”.
It said it was already working on an “international forum to accelerate and strengthen our existing work in this area” and had invested hundreds of millions of pounds to fight abuse on its platforms.
Twitter said “terrorist content has no place on” its platform.
“We continue to expand the use of technology as part of a systematic approach to removing this type of content.
“We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society and academia.”
Analysis: Joe Lynam, BBC business correspondent
Calling for technology companies to “do more” has become one of the first responses by politicians after terror attacks in their country.
Theresa May’s comments on that subject were not new – although the tone was.
She has already proposed a levy on internet firms, as well as sanctions on firms for failing to remove illegal content, in the Conservative party manifesto published three weeks ago.
Given that 400 hours of video are uploaded to YouTube every minute, and that there are two billion active Facebook users, clamping down on sites which encourage or promote terror needs a lot of automatic detection – as well as the human eye and judgement.
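The scale argument can be made concrete with quick arithmetic. Using only the 400-hours-per-minute figure quoted above (the eight-hour shift length is an illustrative assumption, not a figure from the article):

```python
# Back-of-envelope scale calculation using the 400 hours/minute figure.
HOURS_UPLOADED_PER_MINUTE = 400

minutes_per_day = 24 * 60
hours_per_day = HOURS_UPLOADED_PER_MINUTE * minutes_per_day

# If each human reviewer could watch video in real time for an
# eight-hour shift (an assumed figure), reviewing one day's uploads
# by eye alone would need this many reviewers:
reviewers_needed = hours_per_day // 8

print(hours_per_day)      # 576000 hours of new video per day
print(reviewers_needed)   # 72000 full-time reviewers
```

That is why platforms lean on automated detection first, with human review reserved for the content that machines flag.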
Technology companies such as Microsoft, Google, Twitter and Facebook are all part of an international panel designed to weed out terrorist material and prevent the advocacy of terrorism worldwide.
That involves digitally fingerprinting violent images and videos as well as sharing a global database of users who may be extremist.