Now that U.S. politicians know that Facebook is unable to stop malicious actors from using it to influence public opinion, the social network is doing its best to avoid tougher regulatory treatment. It shouldn't get off so easily, though: It must admit it's really a media company and accept legal liability for the material it publishes.

As part of Facebook's preemptive campaign, which has already seen Chief Executive Officer Mark Zuckerberg promise more transparency and more human control on the advertising side of the business, Chief Operating Officer Sheryl Sandberg gave a mea culpa interview to Axios last week. "It's not just that we apologize," she said. "We're angry, we're upset. But what we really owe the American people is determination." She immediately devalued that strong statement, however, by continuing to claim that Facebook is not a media company: "At our heart we're a tech company; we hire engineers. We don't hire reporters, no one's a journalist, we don't cover the news."

In response, Wired magazine published a handy guide for executives on how to tell whether you're running a media company, starting with a delicious question worthy of Captain Obvious: "Are you the country's largest source of news?" It goes on to list well-known facts about Facebook - it sells ads against content, hires moderators, censors certain types of content, commissions content providers to create original products and so on - that place it squarely in the media category. I can only add that Facebook doesn't need to hire (or pay) journalists to produce journalism: They do it for free for the social media giants. Nor do you have to call someone who produces journalistic content a journalist; labels change little about the nature of the output.

In the U.S., the biggest reason Facebook doesn't want to be a media company, despite overwhelming evidence that it is one, is the Communications Decency Act of 1996.
It protects "interactive computer service providers" from liability for content they carry that they did not produce themselves. Facebook and Twitter didn't exist in 1996, so the law was largely meant to protect internet service and web hosting providers. That's fair: They're no more responsible for the content than pulp and paper producers or newsagents. Facebook, however, should not enjoy the same protection.

Thanks to the First Amendment, it's extremely difficult to mount a legal attack against the producers of fake news in the U.S. Courts have been reluctant to restrict speech in any form, and it's harder than in most other countries to prove that an outlet or a person knowingly published libelous information. Yet even in this legal environment it is sometimes possible to fight back, as yogurt maker Chobani did against conspiracy theorist Alex Jones, who claimed the firm had been "caught importing migrant rapists." Jones settled the lawsuit and apologized.

Today, if potentially libelous information is published on a platform such as Facebook or Twitter, the platform isn't liable for any damages. In traditional media, by contrast, both the outlet and the journalist can be held responsible. That's where the regulation of social media needs to change.

Making Facebook directly responsible for everything it publishes would probably be going too far, but not for the reasons Facebook itself puts forward. It keeps saying it's unable to police the vast sea of content its billion users produce. That's a flawed argument: Attracting hundreds of millions of unpaid writers and refusing to edit them because there are too many wouldn't have saved any other news outlet from liability. Practically speaking, however, the goal of any new regulation shouldn't be to bury Facebook and its rivals in lawsuits: There should be a way for them to make the transition to surviving as legitimate media businesses.
It might be a better idea to make Facebook, Twitter and others liable only in cases where the original producer of the offending content cannot be traced. Take the case of a Russian troll farm that can set up any number of fake accounts and pages to spread noxious content. Facebook has great difficulty locating such accounts and pages even when their holders actually pay it for ads; if no money changes hands, the job becomes even more daunting.

What if Facebook could be sued as the publisher of libel whenever it failed to provide a plaintiff with the bona fides of a potential defendant? My guess is that, unlike today, it would work diligently to make sure people follow its stated policy: "Facebook users provide their real names and information," multiple accounts are banned, and a suspended user cannot open a new account without permission. How it would do that is really up to Facebook; there are multiple ways to make sure all users are real people who are responsible for what they publish for everyone to see. An incentive to identify users properly would also solve the advertising transparency problem, which is less important, however, than the traceability of public posts to specific authors.

If Facebook, contrary to its own rules, wants to provide anonymity in the name of some lofty ideal like giving a voice to dissidents, it should be prepared to pay for it when necessary - a laudable undertaking that media liability insurance could cover.

If anything good comes out of the Russia scandal, it should be a level playing field for media and a recognition that publishing information comes with responsibilities. Engineers or no engineers, Facebook managers are as capable of taking them on as media managers and journalists.