No, Dogs Are Not ODing After Eating Fentanyl-Laced Human Poo
First there were police officers falsely claiming to overdose simply from being in the presence of fentanyl. Then there were the cops giving their K-9s naloxone for the same reason. But the latest fentanyl moral panic is a lot grosser: pet owners—and a growing contingent of drug war proponents decrying San Francisco as a dystopian hell hole—are warning people that dogs are getting high from eating the feces of drug users.
Earlier this week, a San Francisco-based reporter tweeted about how a dog ingested “Human Feces Tainted W/Marijuana & Opioids.”
The reporter quoted a pet owner named Jackie who said her one-year-old “Himalayan poodle” (note: this is not a real dog breed) Pockets ate poo last week and was subsequently “wobbling & her tail was down.”
“Pockets is lucky she didn’t need Narcan. A vet tech told me she recently saw a dog that ate human poo w/meth,” the journalist said, adding the vet who saw Pockets said she comes across this scenario a few times a week.
A similar story involving a dog eating meth-laced poo in San Francisco went viral last year.
But a toxicologist told VICE News that, because of the way the human body processes drugs, the theory doesn’t make sense.
“It would require a lot of work and a lot of eating of feces,” said Dr. Ryan Marino, medical director of toxicology and addiction medicine at University Hospitals in Cleveland.
The reason it would be so hard is that opioids and amphetamines are often completely metabolized by the body, leaving only a minimal amount in the waste, he continued.
“Any amount of usable drug in feces from someone who's using drugs is going to be essentially negligible.”
Marino said the one exception could be THC (the psychoactive cannabinoid in weed), which is metabolized into a non-active metabolite in human feces. He said that wouldn’t have any psychoactive effect in humans, but he wasn’t sure if it could have an effect on dogs.
A post from the American Society for the Prevention of Cruelty to Animals noted that more dogs are getting stoned due to consuming weed edibles and that “exposures in indirect ways have also been reported, including consumption of human feces.” A report published in the Australian Veterinary Journal in 2022 that looked at 15 dogs that ate human poo found that “ingestion of human feces containing THC may lead to marijuana toxicosis (poisoning) in dogs.” (Luckily, even if your dog eats weed, they are unlikely to die from it.)
When it comes to fentanyl, Marino said dogs actually have a higher tolerance than humans—even if someone left the actual drugs out as opposed to feces.
As for why this urban legend seems to be gaining popularity, he said it plays well in pro-drug war circles and with those being targeted by that agenda.
“Saying that now this is a threat to your beloved dog certainly plays as well to a more wealthy conservative audience.”
Regardless, it’s probably best not to let your dog eat a big pile of shit.
Linux Foundation and pals – including Intel – back software ecosystem around RISC-V
Linux Foundation Europe and a number of big names in tech have banded together to drive development of a comprehensive software ecosystem that supports the open standard RISC-V processor architecture.…
Millions of PC motherboards were sold with a firmware backdoor

Hiding malicious programs in a computer’s UEFI firmware, the deep-seated code that tells a PC how to load its operating system, has become an insidious trick in the toolkit of stealthy hackers. But when a motherboard manufacturer installs its own hidden backdoor in the firmware of millions of computers—and doesn’t even put a proper lock on that hidden back entrance—they’re practically doing hackers’ work for them.
Researchers at firmware-focused cybersecurity company Eclypsium revealed today that they’ve discovered a hidden mechanism in the firmware of motherboards sold by the Taiwanese manufacturer Gigabyte, whose components are commonly used in gaming PCs and other high-performance computers. Whenever a computer with the affected Gigabyte motherboard restarts, Eclypsium found, code within the motherboard’s firmware invisibly initiates an updater program that runs on the computer and in turn downloads and executes another piece of software.
While Eclypsium says the hidden code is meant to be an innocuous tool to keep the motherboard’s firmware updated, researchers found that it’s implemented insecurely, potentially allowing the mechanism to be hijacked and used to install malware instead of Gigabyte’s intended program. And because the updater program is triggered from the computer’s firmware, outside its operating system, it’s tough for users to remove or even discover.
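To make the risk concrete, here is a minimal sketch of why an unauthenticated “download and execute” updater invites hijacking, and what even a basic integrity check buys. This is an illustration only, using placeholder URLs, filenames, and a hypothetical pinned digest; it is not Gigabyte’s actual code or endpoints.

```python
# Sketch of why "download and execute" without authentication is hijackable.
# All URLs, paths, and digests are illustrative placeholders -- this is not
# Gigabyte's actual update mechanism.
import hashlib
import subprocess
import urllib.request

def insecure_update():
    # Plain HTTP and no integrity check: anyone who can tamper with the
    # traffic (a compromised network, poisoned DNS) can substitute their
    # own binary, and this code will happily run it.
    payload = urllib.request.urlopen("http://updates.example.com/app.exe").read()
    with open("app.exe", "wb") as f:
        f.write(payload)
    subprocess.run(["./app.exe"])

def safer_update(expected_sha256: str):
    # HTTPS plus a digest pinned out-of-band. A production updater would
    # verify a vendor signature instead, but even this defeats a simple
    # payload swap: a tampered file no longer matches the expected hash.
    payload = urllib.request.urlopen("https://updates.example.com/app.exe").read()
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        raise RuntimeError("update failed integrity check; refusing to run it")
    with open("app.exe", "wb") as f:
        f.write(payload)
    subprocess.run(["./app.exe"])
```

The sketch’s point is narrow: without an integrity or signature check, whoever controls the network path controls what gets executed, and because the mechanism re-arms itself from firmware on every boot, deleting the dropped program doesn’t help.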
Judge Bans AI-Generated Filings In Court Because It Just Makes Stuff Up
A district judge in Texas issued an order on Tuesday banning the use of generative artificial intelligence to write court filings without a human fact-check, as the technology becomes more common in legal settings despite well-documented shortcomings, such as making things up.
“All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” the order by Judge Brantley Starr stated.
This decision follows an incident where a Manhattan lawyer named Steven A. Schwartz used ChatGPT to write a 10-page brief that cited multiple cases that were made up by the chatbot, such as “Martinez v. Delta Air Lines,” and “Varghese v. China Southern Airlines.” After Schwartz submitted the brief to a Manhattan federal judge, no one could find the decisions or quotations included, and Schwartz later admitted in an affidavit that he had used ChatGPT to do legal research.
Even Sam Altman, the CEO of ChatGPT maker OpenAI, has warned against using the chatbot for more serious and high-stakes purposes. In an interview with Intelligencer’s Kara Swisher, Altman admitted the bot will sometimes make things up and present users with misinformation.
The tendency of large language models (LLMs) like ChatGPT to make things up, also known as hallucination, is a problem that AI researchers have been vocal about. In a study accompanying the release of GPT-4, Microsoft researchers wrote that the chatbot has trouble knowing when it is confident or just guessing, makes up facts that aren’t in its training data, has no way to verify whether its output is consistent with its training data, and inherits the biases and prejudices in that data.
“These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations,” Starr wrote in his order. “Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth).”
Starr attached a “Mandatory Certificate Regarding Generative Artificial Intelligence” that attorneys must sign when appearing in his court. “I further certify that no portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence—including quotations, citations, paraphrased assertions, and legal analysis—will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the Court,” the certificate states.
Outside of court, AI has already been proven to spread misinformation. Last week, dozens of verified accounts on Twitter posted about an explosion near the Pentagon alongside an AI-generated image. There was also a Reddit trend in March where people would create fake historical events using AI, such as “The 2001 Great Cascadia 9.1 Earthquake & Tsunami.”
Ars Frontiers recap: What happens to developers when AI can code?
Our second AI panel of the day, featuring Georgetown University's Drew Lohn (center) and Luta Security CEO Katie Moussouris (right).
The final panel of the day at our Frontiers conference this year was hosted by me—though it was going to be tough to follow Benj's panel because I didn't have a cute intro planned. The topic we were covering was what might happen to developers when generative AI gets good enough to consistently create good code—and, fortunately, our panelists didn't think we had much to worry about. Not in the near term, at least.
I was joined by Luta Security founder and CEO Katie Moussouris and Georgetown Senior Fellow Drew Lohn, and the general consensus was that, although large language models can do some extremely impressive things, turning them loose to create production code is a terrible idea. While generative AI has indeed demonstrated the ability to create code, even cursory examination shows that today's large language models (LLMs) often do the same thing when coding that they do when spinning stories: they just make a whole bunch of stuff up. (The term of art here is "hallucination," but Ars AI expert Benj Edwards tends to prefer the term "confabulation" instead, as it more accurately reflects what it feels like the models are doing.)
So, while LLMs can be relied upon today to do simple things, like creating a regex, trusting them with your production code is way dicier.
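Taking the regex example literally, a cheap way to act on that advice is to spot-check model output before trusting it. Here is a minimal sketch, where the pattern stands in for a hypothetical LLM suggestion for matching ISO-style dates (the test strings are ours, not a model’s):

```python
# Minimal sanity check for model-generated code before trusting it.
# The pattern below stands in for a hypothetical LLM suggestion; the
# spot checks are ours.
import re

llm_suggested = r"^\d{4}-\d{2}-\d{2}$"  # hypothetical model output for ISO dates
pattern = re.compile(llm_suggested)

should_match = ["2023-05-31", "1999-01-01"]
should_reject = ["2023-5-31", "not a date", "2023-05-31T12:00"]

assert all(pattern.match(s) for s in should_match)
assert not any(pattern.match(s) for s in should_reject)

# Passing spot checks is not correctness: this pattern still accepts
# impossible dates like "2023-99-99", which is exactly the kind of gap
# a human reviewer has to catch.
print("spot checks passed")
```

That last comment is the panel’s point in miniature: a quick test catches the obvious confabulations, but a human still has to judge whether the code is actually right.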
CRM giant Salesforce's focus on margins sees sales growth slip
Salesforce has disappointed the markets as it failed to lift its revenue forecast and saw professional services projects stall.…
The “death of self-driving cars” has been greatly exaggerated

The latest iteration of Waymo's self-driving technology is based on the Jaguar I-PACE. (credit: Waymo)
Seven years ago, hype about self-driving cars was off the charts. It wasn’t just Tesla CEO Elon Musk—who has been making outlandish predictions about self-driving technology since 2015. In 2016, Ford set a goal to start selling cars without steering wheels by 2021. The same year, Lyft predicted that a majority of rides on its network would be autonomous by 2021.
None of that happened. Instead, the last few years have seen brutal consolidation. Uber sold off its self-driving project in 2020, and Lyft shut down its effort in 2021. Then, last October, Ford and Volkswagen announced they were shutting down their self-driving joint venture called Argo AI.
Today, a lot of people view self-driving technology as an expensive failure whose moment has passed. The Wall Street Journal’s Chris Mims argued in 2021 that self-driving cars “could be decades away.” Last year, Bloomberg’s Max Chafkin declared that “self-driving cars are going nowhere.”
Google veep calls out Microsoft's cloud software licensing 'tax'
Google is very publicly adding to the chorus of complaints about Microsoft's alleged restrictive cloud software licensing policies, claiming that unless the European Union formally tackles it, the industry and customers will suffer lasting damage.…
What to expect from AMD's June datacenter, AI shindig
Comment AMD is just weeks away from unveiling the next phase of its datacenter portfolio at a swish launch in San Francisco.…
UK.gov reboots ERP refresh with £934 million procurement
The UK government has put £934 million ($1.15 billion) on the table to update an ERP system in a cluster of Whitehall departments, a mega-project that was heavily criticized over its lack of funding and business case.…
WTF is solid state active cooling? We’ve just seen it working on a mini PC
Computex A US upstart has developed a solid-state active cooling device not much bigger than an SD card that uses a variety of exotic technologies to suck heat out of small enclosed spaces.…
Amazon Ring, Alexa accused of every nightmare IoT security fail you can imagine
America's Federal Trade Commission has made Amazon a case study for every cautionary tale about how sloppily designed internet-of-things devices and associated services represent a risk to privacy – and made the cost of those actions, as alleged, a mere $30.8 million.…
Ukraine war blurs lines between cyber-crims and state-sponsored attackers
A change in the deployment of the RomCom malware strain has illustrated the blurring distinction between cyberattacks motivated by money and those fueled by geopolitics, in this case Russia's illegal invasion of Ukraine, according to Trend Micro analysts.…
NASA experts looked through 800 UFO sightings and found essentially nothing
Video Experts leading NASA's study of unidentified anomalous phenomena – the new official term for what we used to call UFOs – have studied 800 unclassified events recorded over 27 years, and found that only two to five percent of cases are truly unexplainable.…
Foxconn is ecstatic you're all going gaga for AI servers
Chip designers like Nvidia aren't the only companies riding high on the AI hype train. Foxconn chairman Young-Way Liu on Wednesday forecast surging server sales over the next year as adoption of large language models and other powerful AI systems grows – and with it, demand for the hardware needed to train and run that tech.…
Dark Pink cyber-spies add info stealers to their arsenal, notch up more victims
Dark Pink, a suspected nation-state-sponsored cyber-espionage group, has expanded its list of targeted organizations, both geographically and by sector, and has carried out at least two attacks since the beginning of the year.…
Get ready Snowflakes, Azure AI is coming for you with one click
Microsoft is looking to make its Azure cloud the place for enterprises to run their AI and machine learning workloads.…
Feds, you'll need a warrant for that cellphone border search
A federal district judge has ruled that authorities must obtain a warrant to search an American citizen's cellphone at the border, barring exigent circumstances.…
6 monitor and TV innovations remind us that trade shows still exist

Samsung Display imagines its unfurling screen embodying future portable monitors. (credit: Samsung Display)
Believe it or not, technology trade shows are still a thing in 2023. There are huge exceptions, like the game industry's previously annual E3 conference. But others, like the CES consumer tech show in Las Vegas in January and the Computex computing show in Taipei this week, are still kicking and offering peeks at intriguing consumer monitor and TV tech.

No one knows what the future of tech shows holds. The pricey, flashy E3 show, for example, was declining for years before its last in-person event in 2019. Other trade shows are enduring notable declines in exhibitor numbers, in-show announcements, and attendance.
This May, however, remained a time for tech trade shows. Computex started Tuesday, and The Society for Information Display (SID) held Display Week 2023 in Los Angeles last week.
As a tech reporter, the fun part of trade shows isn't racking up steps or spotting slivers of time to eat and sleep. It's checking out interesting products, features, and concepts that customers will soon see. It feels somewhat odd to say in this post-pandemic world, but May was actually an interesting time for trade show displays.
Eating disorder non-profit pulls chatbot for emitting 'harmful advice'
The National Eating Disorder Association (NEDA) has taken down its Tessa chatbot for giving out bad advice to people.…