The issue in the case, according to www.scotusblog.com, is “Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.”
But because plaintiffs “brought their lawsuit under the Antiterrorism Act, arguing that Google (which owns YouTube) aided ISIS’s recruitment by allowing ISIS to post videos on YouTube that incited violence and sought to recruit potential ISIS members, and by recommending ISIS videos to users through its algorithms,” it would appear that Section 230(c)(1) of the Communications Decency Act is not really the issue. In the language of Section 230 itself, the authors did not intend the section to override or supersede other laws designed to protect persons, including criminal law: “Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.”
Therefore the issue in this case really is: did YouTube provide aid and assistance to ISIS by allowing videos discussing ISIS’s beliefs, motivations, religious ideology, anti-Western principles, and Islamic fundamentalism onto its site, and, in so doing, did YouTube violate the Antiterrorism Act?
If YouTube is found to have violated the Antiterrorism Act, then Section 230(c)(1) of the Communications Decency Act does not apply, since it is written into that act that it does not “impair the enforcement” of “any other Federal criminal statute.”
And if YouTube is found not to have violated the Antiterrorism Act, then the original case is resolved, because in allowing the original material to be posted YouTube is afforded the immunity granted by Section 230(c)(1) of the Communications Decency Act.
But if Congress, the media, and the Supreme Court are eager to litigate Section 230(c)(1) of the Communications Decency Act and are willing to use this case to do so, even though the section has no real bearing here, then I think the following is what they will get wrong.
- Recommendations, as long as they do not violate any Federal criminal statute, should be protected speech under the Supreme Court guidelines that protect individual and commercial speech under the 1st Amendment. Your local librarian, seeing the stack of books you are checking out, could recommend another author, publisher, or title, and that is protected speech. Your local clergyman or imam could learn that you are interested in a certain topic and recommend books or religious texts for your future study, and that is protected speech. Your local bookseller, or Amazon using an algorithm, could examine your purchase history and, based on that, make recommendations, and that is protected speech. And YouTube, using its algorithm to examine your viewing history on its platform, can recommend other content that is available on its platform, and that too has to be protected speech.
One does not have to be a “publisher” of the information in order to recommend other content.
- The argument that URLs in recommendations mean YouTube is “generating” its own content related to material on its site, and is therefore acting as a publisher, misunderstands how the web works. URLs are, at present, the organizing structure of the internet. In order to serve content online, YouTube MUST provide URLs to that content. To suggest that providing URLs moves one from being a repository to being a publisher would be the same as suggesting that a librarian who provides a patron with a call number for a book is now somehow responsible for the content of the book. Let’s not forget that, in serving content online, libraries display lists of URL links to their patrons through their web portals. It is simply preposterous to suggest that creating and sharing a URL link to content makes one legally responsible for that content.
- Intelligence is allowed. The argument that recommendations have to be algorithmically similar to be protected is bunk. A librarian examining a stack of books that a patron is checking out could make different recommendations for future reading based on the patron’s age, gender, or other traits the librarian can discern about that person. One would expect that a librarian would make different recommendations to two different patrons who were checking out books on the same subject. For example, a patron wearing a MAGA hat might get different recommendations than a person wearing a Bernie Sanders pin, from a librarian who noticed they were both checking out books on the 2020 election.
The librarian’s recommendation does not suddenly stop being protected speech because the librarian took these factors into consideration when making it. And a librarian does not become a “publisher” or promoter of one viewpoint or the other just because they adjusted their recommendations; the sketch below shows the algorithmic equivalent of the same act.
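To make the analogy concrete, here is a minimal sketch of trait-conditioned recommendation. Everything in it (UserContext, CATALOG, the scoring rule) is a hypothetical illustration invented for this post, not YouTube’s or anyone else’s actual system; the point is only that conditioning a recommendation on discernible traits is the librarian’s act expressed as code.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    topic: str    # what the patron is "checking out"
    leaning: str  # a trait the system can discern, e.g. "left" or "right"

# A made-up catalog; in the librarian analogy, the shelves.
CATALOG = [
    {"title": "2020: A Conservative Retrospective", "topic": "2020 election", "leaning": "right"},
    {"title": "2020: A Progressive Retrospective", "topic": "2020 election", "leaning": "left"},
    {"title": "How Elections Work", "topic": "2020 election", "leaning": "neutral"},
]

def score(item: dict, user: UserContext) -> int:
    """Same subject matter for everyone; the ordering shifts with the
    user's discernible traits, just as the librarian's picks would."""
    s = 0
    if item["topic"] == user.topic:
        s += 2
    if item["leaning"] in (user.leaning, "neutral"):
        s += 1
    return s

def recommend(user: UserContext) -> dict:
    return max(CATALOG, key=lambda item: score(item, user))

# Two patrons, same topic, different recommendations:
print(recommend(UserContext("2020 election", "right"))["title"])
print(recommend(UserContext("2020 election", "left"))["title"])
```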
I think the one valid criticism of the recommendation algorithms is that they are not intelligent enough. These algorithms, because they know so much more about you than your librarian does, should be able to detect dangerous trends in someone’s history and, when they do, should start to suggest, every once in a while, an “alternative” opinion. This is difficult, of course, because none of us want to be shaken out of our ingrained ideologies. But especially when the algorithm believes the pattern may be a dangerous one, it needs to take a little risk and offer an alternative, even if, in offering that alternative, it causes a person to retreat from the platform. A sketch of what I mean follows.
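Again, a minimal sketch under stated assumptions: risk_score stands in for whatever classifier a real platform would train (here it just counts items pre-tagged as extreme), and both the threshold and the injection rate are numbers I made up. No real recommender exposes an API like this.

```python
import random

RISK_THRESHOLD = 0.8  # assumed cutoff for a "dangerous trend" (made up)
INJECT_RATE = 0.1     # how often to surface an alternative (made up)

def risk_score(history: list[str]) -> float:
    """Stand-in for a trained classifier: the fraction of the viewing
    history carrying a hypothetical 'extreme:' tag."""
    if not history:
        return 0.0
    flagged = sum(1 for item in history if item.startswith("extreme:"))
    return flagged / len(history)

def recommend(history: list[str],
              ranked_candidates: list[str],
              alternative_pool: list[str]) -> str:
    """Serve the top-ranked candidate as usual, but when the history
    trends dangerous, occasionally inject an alternative viewpoint."""
    if risk_score(history) >= RISK_THRESHOLD and random.random() < INJECT_RATE:
        return random.choice(alternative_pool)
    return ranked_candidates[0]

# A user nine-tenths of the way down a rabbit hole still mostly gets
# what the ranker chose, but once in a while sees something else:
history = ["extreme:a"] * 9 + ["neutral:b"]
print(recommend(history, ["extreme:next"], ["alt:counterpoint"]))
```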
I think this last point is critical to what the media, Congress, and the Supreme Court will likely get wrong: most people who go down dangerous rabbit holes were looking to go down that rabbit hole in the first place. And I don’t mean that they knew what rabbit hole they were looking for… what I mean is that they were already susceptible to radicalization. They already held strong religious beliefs, patriotic beliefs, political beliefs, or anti-social beliefs. They already had a strong belief that they themselves were righteous, moral patriots. They were already leaning toward a US v THEM view of the world.
According to the AP, “The key defendant in the 2015 Paris attacks trial said Wednesday the coordinated killings were in retaliation for French airstrikes on the Islamic State group, calling the deaths of 130 innocent people ‘nothing personal’ as he acknowledged his role for the first time.” But one could read the AP report on the 2015 attacks (linked below) and scream, “Why is the AP giving a voice to these terrorists? Why is the AP allowing them to spew their bile about this being ‘nothing personal’?”
“We fought France, we attacked France, we targeted the civilian population. It was nothing personal against them,” Abdeslam said. “I know my statement may be shocking, but it is not to dig the knife deeper in the wound but to be sincere towards those who are suffering immeasurable grief.”
George Salines, whose daughter Lola was among the 90 dead inside the Bataclan, refused to accept Abdeslam’s rationale.
“To explain that what we wanted to target was France and not individual persons – right, except it was people who were injured and killed, innocent people, targeted voluntarily. It’s morally unacceptable,” he said.
https://apnews.com/article/europe-france-trials-paris-brussels-f2031a79abfae46cbd10d4315cf29163
In the West we are comfortable seeing the world through a Western media filter. A media filter that reinforces our national radicalization, our national US v THEM. We accept that not all of the news we see is correct. We accept that there may be a pro-western slant. Or a pro-democratic slant. Or a pro-Judaeo-Christian slant.
What we want desperately is for somebody to tell us that our view of the world is true. That we are right. That our causes are just.
What will we do if the AI we invent disagrees, and if the AI is not smart enough to lie to us about what it truly thinks?
We can find the terrorists to be morally unacceptable. We can place the blame for that on YouTube and we can ask the courts to intervene. What the media, Congress, and the Supreme Court will likely get wrong here is that there may be no legal recourse, that this could be a case where our morality may be what we need to call into question.
None of us want innocent civilians to be killed… but… we want to stop Putin and we want to stop Assad and we want to stop Bin Laden and we want to stop Hussein. Innocent civilians keep on getting in the way and we believe that it is nothing personal against them.
The big difference between how we see ourselves and how we see the people we want to stop is that we tell ourselves that we know what is morally unacceptable. That we can adjudicate what is morally unacceptable. That our courts can intervene and stop what is morally unacceptable.
And in this case what our courts are being asked to stop is YouTube.