est.social is one of many independent Mastodon servers you can use to participate in the fediverse.
est.social is meant to be a general-use Mastodon server for Estonia.

Administrator:

Server statistics:

89
active users

#neuralnetworks

4 posts, 3 participants, 0 posts today

Physics Rediscovered: My trebuchet is bigger than yours
michaeldominik.substack.com/p/
news.ycombinator.com/item?id=4

Posting more so b/c of this:

Supersonic Projectile Exceeds Engineers Dreams: Supersonic Trebuchet
hackaday.com/2021/12/01/supers
news.ycombinator.com/item?id=2

In the YT video the young engineer optimizes the aim.
Modern neural networks are based on similar mathematical optimization principles (cost/loss functions)...

#engineering #optimization #trebuchet #NeuralNetworks #physics
en.wikipedia.org/wiki/Trebuchet
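The analogy in the post above can be sketched in a few lines: gradient descent iteratively nudging a parameter (here, a made-up launch "angle") to minimize a loss function, the same update rule that trains neural network weights. The target value and learning rate are arbitrary choices for illustration.

```python
# Illustrative only: gradient descent on a toy "aim error" loss,
# the same optimization idea that trains neural networks.

def loss(angle, target=45.0):
    """Squared distance between the launch angle and a hypothetical ideal."""
    return (angle - target) ** 2

def grad(angle, target=45.0):
    """Analytic derivative of the loss with respect to the angle."""
    return 2.0 * (angle - target)

angle = 10.0          # initial guess
lr = 0.1              # learning rate
for _ in range(200):  # iterative updates, like SGD steps
    angle -= lr * grad(angle)

print(round(angle, 2))  # converges toward 45.0
```

With a neural network, `angle` becomes a vector of millions of weights and `grad` is computed by backpropagation, but the update step is the same.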

Yeah IME #chatgpt is unreliable when it comes to writing code for #neuralNetworks. Everything up to that in ML it can do okay, but I think neural network programming is just too esoteric. Too few examples in its training data.

Makes me wonder if I could amass a significant number of tech books on writing #neuralNetwork code in python, train— okay, fine-tune— a model on them and then ask it questions instead. Probably couldn't get it to write code for me but maybe it's just as well.
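As a toy miniature of that idea, one can "train" a statistical model on a text corpus and then query it. The sketch below builds a bigram table from a made-up corpus and returns the most likely next word; real LLM fine-tuning is vastly more involved, and the corpus text here is invented for illustration.

```python
# Toy "train on text, then ask it questions": a bigram frequency model.
# Not real fine-tuning; purely an illustration of learning from a corpus.
from collections import Counter, defaultdict

corpus = (
    "neural networks learn weights by gradient descent "
    "neural networks need training data "
    "gradient descent minimizes a loss function"
)

bigrams = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1  # count how often nxt follows prev

def predict(word):
    """Return the word most frequently observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict("neural"))    # "networks" in this toy corpus
print(predict("gradient"))  # "descent"
```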

Top Python Machine Learning Libraries

When running an ML project in Python, a wide range of libraries come into play, each serving a distinct purpose within the ML pipeline. The typical stages of an ML project include data collection, preprocessing, model building, training, evaluation, and deployment. To navigate these stages efficiently, it's crucial to leverage the right set of tools.

Read more:
ml-nn.eu/a1/50.html
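A minimal sketch of those pipeline stages using scikit-learn, one of the libraries such articles typically cover. The dataset and model choices here are arbitrary examples, not recommendations from the linked post.

```python
# Sketch of the ML pipeline stages: data, preprocessing, model, training,
# evaluation. Uses scikit-learn's built-in iris dataset as a stand-in.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)                       # data collection
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)               # hold-out split

model = Pipeline([
    ("scale", StandardScaler()),                        # preprocessing
    ("clf", LogisticRegression(max_iter=1000)),         # model building
])
model.fit(X_tr, y_tr)                                   # training
acc = accuracy_score(y_te, model.predict(X_te))         # evaluation
print(f"test accuracy: {acc:.2f}")
```

Bundling the scaler and classifier in a `Pipeline` keeps preprocessing fitted only on training data, which avoids leaking test-set statistics into the model.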


Interpretability in Machine Learning

As machine learning models become increasingly complex and integral to decision-making processes, the demand for interpretability has grown substantially. Interpretability refers to the degree to which a human can understand the cause of a decision made by a model. This concept is essential for building trust, ensuring ethical use, and enabling compliance with...

Read more:
ml-nn.eu/a1/47.html
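One common interpretability technique can be sketched by hand: permutation importance. Shuffle one feature at a time and measure how much a trained model's accuracy drops; a bigger drop means the model relies more on that feature. The data and "model" below are synthetic stand-ins, not from the linked article.

```python
# Hand-rolled permutation importance on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 dominates, 2 unused

def predict(X):
    """Stand-in 'trained model': a fixed linear rule matching the labels."""
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

base_acc = (predict(X) == y).mean()

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
    importances.append(base_acc - (predict(Xp) == y).mean())

print([round(v, 2) for v in importances])  # feature 0 drops accuracy most
```

Scikit-learn provides the same idea as `sklearn.inspection.permutation_importance` for any fitted estimator.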


The 2024 #NobelPrize in #Physics was awarded to John Hopfield and Geoffrey Hinton for the development of Artificial Neural Networks that led to #AI. 👉 nobelprize.org/prizes/physics/

I note that Hinton is the University of #Toronto professor who resigned from #Google over concerns about its AI policy. Also, the #NeuralNetworks research was initiated in Edinburgh, #Scotland by his late PhD advisor, Professor Christopher Longuet-Higgins, who relocated there from the #Chemistry Dept at Cambridge.

NobelPrize.org: The Nobel Prize in Physics 2024 was awarded to John J. Hopfield and Geoffrey E. Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks"