In their quest to discover effective new medicines, scientists search for drug-like molecules that can attach to disease-causing proteins and change their function.
The piece closes with a conclusion (~1,700 words) and is backed up by over 200 references (~6,500 words).
We must stop crediting the wrong people for inventions made by others.
Instead, let's heed the recent call in the journal Nature: let 2020 be the year in which we value those who ensure that science is self-correcting [SV20].
As those who know me can testify, finding and citing the original sources of scientific and technological innovations is important to me, whether they are mine or other people's [DL1][DL2][HIN][NASC1-9]. The present page is offered as a resource for computer scientists who share this inclination.
By grounding research in its true intellectual foundations and crediting the original inventors, we help to keep science self-correcting.
Have you ever wondered if it's possible to learn all there is to know about machine learning and deep learning from a book?
Machine Learning: A Journey to Deep Learning, with Exercises and Answers is designed to give the self-taught student a solid foundation in machine learning, with step-by-step solutions to the formative exercises and many concrete examples. By working through this text, readers should be able to understand and apply machine learning algorithms, as well as create new ones.
The main parts of the book address linear and nonlinear regression, supervised learning, learning theory, feature extraction, and unsupervised learning. The statistical approach leads to the definition of regularization, introduced through the example of regression. Building on regression, we develop the theory of perceptrons and logistic regression. The book investigates the relation between bias and variance as a consequence of the finite training sample set used in machine learning.
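As a rough, self-contained illustration of regularization growing out of regression (this is not code from the book), the Python sketch below fits a ridge-regularized linear model in closed form. The function name ridge_fit and the toy data are my own choices; the penalty weight lam controls the bias-variance trade-off the blurb alludes to.

```python
# A minimal sketch (not from the book): ridge regression, i.e. linear
# regression with an L2 penalty that discourages large weights.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Solve w = argmin ||Xw - y||^2 + lam * ||w||^2 in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy data: y = 3x + noise, with few samples so regularization matters.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 1))
y = 3 * X[:, 0] + 0.1 * rng.normal(size=20)

for lam in (0.0, 0.1, 10.0):
    w = ridge_fit(X, y, lam)
    print(f"lambda={lam:5.1f}  w={w[0]:.3f}")  # larger lambda shrinks w toward 0
```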
Watch an AI robot walk with a broken leg, thanks to a brain that never stops learning
Dec. 21, 2020, 9:00 AM
Watch the two simulated robots above, and you’ll notice a big difference. Even though both of their “brains” have evolved over 300 generations to allow them to walk, only one succeeds; the other falls flat on its back.
That’s because only the bot on the left has learned to adapt to new circumstances. Artificial intelligence (AI) often relies on so-called neural networks, algorithms inspired by the human brain. But unlike ours, AI brains usually don’t learn new things once they’ve been trained and deployed; they’re stuck with the same thinking they’re born with.
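The article does not spell out the mechanism, but one common way to let a deployed network keep learning is a Hebbian plasticity rule, in which each weight is nudged in proportion to the activity of the neurons it connects. The Python sketch below is a hypothetical, minimal illustration of that idea, not the researchers' actual model; the network size, learning rate eta, and exact update rule are all assumptions.

```python
# Hypothetical sketch of Hebbian plasticity: the weight matrix keeps updating
# at "deployment" time, unlike a conventional network with frozen weights.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))    # random initial weights
eta = 0.01                                       # plasticity learning rate

def step(x, W):
    """One forward pass plus a Hebbian update: dW = eta * post * pre^T."""
    post = np.tanh(W @ x)
    W = W + eta * np.outer(post, x)              # weights change while running
    return post, W

for t in range(5):
    x = rng.normal(size=n_in)                    # incoming sensory input
    post, W = step(x, W)
    print(f"t={t}  |W|={np.linalg.norm(W):.3f}")  # norm drifts as the net adapts
```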
Dec. 18, 2020, 3:45 PM
Somehow, even in a room full of loud conversations, our brains can focus on a single voice, a phenomenon known as the cocktail party effect. But the louder it gets, or the older you are, the harder it is to do. Now, researchers may have figured out how to fix that with a machine learning technique called the "cone of silence."
Computer scientists trained a neural network, which roughly mimics the brain’s wiring, to locate and separate the voices of several people speaking in a room. The network did so in part by measuring how long it took for the sounds to hit a cluster of microphones in the room’s center.
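That arrival-time idea is, at its core, time-difference-of-arrival (TDOA) estimation. The Python sketch below is a generic illustration of the principle rather than the authors' neural network: it simulates two microphones, estimates the delay between their signals by cross-correlation, and converts that delay to a rough direction of arrival. The sample rate, microphone spacing, and simulated delay are arbitrary choices.

```python
# Illustrative only (not the paper's method): estimate the time difference of
# arrival (TDOA) between two microphones by cross-correlation, then convert
# it to a rough direction of arrival.
import numpy as np

fs = 16000                 # sample rate (Hz)
c = 343.0                  # speed of sound (m/s)
mic_spacing = 0.1          # distance between the two microphones (m)

# Simulate one source: the same waveform reaches mic B 3 samples later.
rng = np.random.default_rng(2)
signal = rng.normal(size=4000)
true_delay = 3
mic_a = signal
mic_b = np.concatenate([np.zeros(true_delay), signal[:-true_delay]])

# Cross-correlate and find the lag with the highest correlation.
corr = np.correlate(mic_b, mic_a, mode="full")
lag = np.argmax(corr) - (len(mic_a) - 1)          # delay in samples
tdoa = lag / fs                                   # delay in seconds

# Convert the TDOA to an angle relative to the broadside of the mic pair.
angle = np.degrees(np.arcsin(np.clip(tdoa * c / mic_spacing, -1, 1)))
print(f"estimated lag: {lag} samples, angle = {angle:.1f} deg")
```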