Another Elsevier paper with obvious AI-written text.
“In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model.”
Thinking to myself this morning, “Threads can’t be as bad as I recall…”
First post in my feed: Instagram influencer with a cup of coffee watching the sunrise from her balcony in a $1000/night hotel built with slave labor in a country where women can’t drive, telling me about how she has discovered that the secret to life is learning to savor the little things.
No, lady, apparently the secret to life is to be born rich, white, beautiful, and oblivious.
Google Scholar is a flaming piece of shit. Why do we use it for scholarly evaluation?
Here, it misattributes 4,873 citations (a decent count for an entire research career) from Keeling and Rohani's landmark book, giving them instead to the authors of a review of that book.
1. “Imagine we land a space probe on one of Jupiter’s moons, take up a sample of material, and find it is full of organic molecules. How can we tell whether those molecules are just randomly assembled goo or the outcome of some evolutionary process taking place on that moon?”
If I were to set up a wordpress blog to write about bullshit, science, big tech, large language models, and all that, what would you think I should title it?
It's pure coincidence that Elon Musk just happened to erase the entire record of how twitter was used to organize and share photos and video from Arab Spring, right?
This afternoon James Zou directed me to a recent pilot study from his group in which they looked at the performance of seven different GPT-detectors that are sometimes used to flag cheating in educational settings.
They found that these detectors commonly misclassify text from non-native English speakers as being written by an AI. A primary driver appears to be the lower perplexity of such text (perplexity being the exponential of the model's per-token cross-entropy loss, i.e., how "surprised" the model is by the text).
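For concreteness, here's a minimal sketch of how that quantity is computed. GPT-2 via Hugging Face transformers is my illustrative choice, not necessarily the model any of the detectors in the study actually use; the point is just that simpler, more formulaic prose scores lower.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text under the model; with labels=input_ids the returned loss
    # is the mean per-token cross-entropy, so exponentiating it gives perplexity.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Plainer, more predictable sentences tend to get lower perplexity scores,
# which is exactly what trips up detectors on non-native English writing.
print(perplexity("The results of the study are presented in Table 1."))
print(perplexity("Bioluminescent fungi perplex taxonomists at dusk."))
```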
It sounds trivial and obvious, but are you reading error bars correctly?
Do you know whether you're looking at standard errors (measures of inferential uncertainty), standard deviations (measures of spread of individual observations), or 95% confidence intervals around the mean?
And are your intuitions about what each of these means correct?
Here's a nice primer, refresher, or teaching article.
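For a concrete feel for how different these three error bars are, here is a quick simulated example; the data are made up, and the 1.96 multiplier is the usual normal approximation for a 95% confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=25)  # 25 simulated observations

mean = sample.mean()
sd = sample.std(ddof=1)          # spread of individual observations
se = sd / np.sqrt(sample.size)   # uncertainty in the estimated mean
ci = (mean - 1.96 * se, mean + 1.96 * se)  # approximate 95% CI for the mean

print(f"mean   = {mean:.2f}")
print(f"SD     = {sd:.2f}  (how much individual observations vary)")
print(f"SE     = {se:.2f}  (how well the mean itself is pinned down)")
print(f"95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

With n = 25, the SE bars are a fifth the length of the SD bars, so mistaking one for the other changes the visual story considerably.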