But could ChatGPT and similar tools go beyond the kind of work done with the late Lennon's voice? Could they emulate McCarthy's storytelling, his seamless and singular weaving of exhilarating lyricism and sublime menace? It seems unlikely, going by recent accounts of what the bots churn out when asked to come up with fiction. There was obvious lifting from real books, a concern previously raised by analysts reviewing how the bots come up with scientific or medical output. Meanwhile, reviewers listed "weak endings", a "lack of a distinctive voice" and "inconsistencies" among the defects of the AI-generated "literature". It sounds a long way from one of the late McCarthy's more celebrated passages, the "coin toss" scene in "No Country for Old Men", the watch-through-your-fingers cinema version of which has been doing the rounds on the internet since the 89-year-old's death. In words that could be applied to some chatbot output, Anton Chigurh, the book's nerveless, psychotic hitman, sneers ominously at a hapless gas station operator: "You don't know what you're talking about, do you?"
In what may calm fears of student papers and school homework being outsourced to artificial intelligence, some experts now believe there are "telltale signs" that make it relatively easy to distinguish human writing from that of chatbots. Scientists at the University of Kansas say they have come up with "a tool to identify AI-generated academic science writing" with more than 99% accuracy. "We tried hard to create an accessible method so that with little guidance, even high school students could build an AI detector for different types of writing," said Heather Desaire, whose team’s work at the university was published in the journal Cell Reports Physical Science on June 7. They said their model "aced a 100% accuracy rate at weeding out AI-generated full perspective articles from those written by humans," with the accuracy dropping to 92% when it came to a paragraph-by-paragraph assessment.
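The Kansas team's claim that "even high school students could build an AI detector" rests on the idea that chatbot prose has measurable stylistic tells. As a minimal, illustrative sketch only (not the team's published model, whose exact features and classifier are not described here), one could extract a few simple style features and apply a toy rule; the feature names and threshold below are assumptions for demonstration:

```python
import re
import statistics

def stylistic_features(text):
    """Extract a few simple style features of the kind often cited
    as telltale signs of machine-generated prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "sentence_count": len(sentences),
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        "len_stdev": statistics.pstdev(lengths) if lengths else 0.0,
        "question_marks": text.count("?"),
        "semicolons": text.count(";"),
    }

def looks_ai_generated(text, stdev_threshold=2.0):
    """Toy heuristic: unusually uniform sentence lengths are treated
    as a (weak) AI signal. Purely illustrative, not the real detector."""
    feats = stylistic_features(text)
    return feats["sentence_count"] >= 3 and feats["len_stdev"] < stdev_threshold
```

A real detector would train a classifier over many such features on labeled human and AI samples, which is how the reported 99%-plus accuracy figures are obtained.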
A study taking in thousands of mammograms saw artificial intelligence outperform the standard clinical risk model for predicting the five-year risk for breast cancer, according to a research paper published on June 6 in the medical journal Radiology. The research was based on screening mammograms from 2016, with the team then using five AI-supported algorithms to generate risk scores, which were compared to each other and to a standard risk prediction system known as the Breast Cancer Surveillance Consortium (BCSC) model. BCSC is used to generate a risk score based on information such as a patient’s age, family history of the disease, whether she has given birth and whether she has dense breasts. “All five AI algorithms performed better than the BCSC risk model for predicting breast cancer risk at 0 to 5 years,” the report found, adding that such a “strong predictive performance” suggests AI is “identifying both missed cancers and breast tissue features that help predict future cancer development.”
In what could be a further sign of job losses to come due to the rise of AI-powered robotics, a Dutch-Swiss team has built a tomato-picking robot with the help of ChatGPT. Researchers at Delft University of Technology (TU Delft) and the Lausanne-based technical university EPFL said the artificial intelligence chatbot offered a range of ideas and suggestions during the process, including what kind of robot they should build in the first place. "We wanted ChatGPT to design not just a robot, but one that is actually useful," said Cosimo Della Santina of TU Delft. The team ultimately chose to home in on food supply, saying that "as they chatted with ChatGPT, they came up with the idea of creating a tomato-harvesting robot."
Smooth talkers, snake oil peddlers and bombastic demagogues have been taking people in since the dawn of time. And then there's the likelihood that people are more responsive when spoken to with confidence, empathy and enthusiasm than when hearing a voice that sounds indifferent, curt or even matter-of-fact - though proponents of tough talking and straight shooting might see this as no more than a truism. Either way, it is no surprise that some of these dynamics are filtering through to robot-people interactions. As it turns out, the more "human" and "charismatic" an artificial intelligence (AI) device or robot sounds, the more receptive its human audience tends to be.
Scientists suffering insults and mass-spam are abandoning Twitter for alternative social networks as hostile climate-change denialism surges on the platform following Elon Musk’s takeover.