Archive for the 'Artificial Intelligence' Category

How are differential equations related to neural networks? What are the benefits of re-thinking neural networks as differential equation engines? In this episode we explain all this, and we point to some material that is worth studying. Enjoy the show!


Residual block
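To see the connection concretely: a residual block computes x_{t+1} = x_t + f(x_t), which is exactly one forward Euler step of the ordinary differential equation dx/dt = f(x) with step size 1. Here is a minimal sketch of that equivalence in plain NumPy; the residual function f below is a hypothetical stand-in for the block's layers.

```python
import numpy as np

def f(x):
    # Hypothetical residual branch: in a real ResNet this would be
    # a small stack of convolutional or linear layers with nonlinearities.
    return np.tanh(x)

def resnet_forward(x, num_blocks):
    # Stacked residual blocks: x_{t+1} = x_t + f(x_t)
    for _ in range(num_blocks):
        x = x + f(x)
    return x

def euler_forward(x, t_end, num_steps):
    # Forward Euler integration of dx/dt = f(x) with step size h
    h = t_end / num_steps
    for _ in range(num_steps):
        x = x + h * f(x)
    return x

x0 = np.array([0.1, -0.5])
# With step size h = 1, the two computations coincide exactly.
print(resnet_forward(x0, 10))
print(euler_forward(x0, t_end=10.0, num_steps=10))
```

Sending the step size to zero (equivalently, the number of layers to infinity) is what turns a residual network into the neural ODE of [5].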




[1] K. He, et al., “Deep Residual Learning for Image Recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.

[2] S. Hochreiter, et al., “Long short-term memory”, Neural Computation 9(8), pages 1735-1780, 1997.

[3] Q. Liao, et al., “Bridging the gaps between residual learning, recurrent neural networks and visual cortex”, arXiv preprint, arXiv:1604.03640, 2016.

[4] Y. Lu, et al., “Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations”, Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.

[5] T. Q. Chen, et al., “Neural Ordinary Differential Equations”, Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.

Read Full Post »

From the beginning of AI in the 1950s until the 1980s, symbolic AI approaches dominated the field. These approaches, also known as expert systems, used mathematical symbols to represent objects and the relationships between them, in order to encode the extensive knowledge bases built by humans.
The opposing paradigm, known as connectionism, is the one behind the machine learning approaches of today.
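As a toy illustration of the symbolic style (my own sketch, not from the episode): facts and if-then rules over explicit symbols, with new knowledge derived by forward chaining.

```python
# Toy symbolic knowledge base: explicit facts and if-then rules,
# in the spirit of the expert systems described above.
facts = {("bird", "tweety"), ("mammal", "rex")}

# Each rule maps a premise predicate to a conclusion predicate.
rules = {
    "bird": "can_fly",      # if X is a bird, then X can fly
    "mammal": "has_fur",    # if X is a mammal, then X has fur
}

# Forward chaining: keep applying rules until no new fact is derived.
derived = set(facts)
changed = True
while changed:
    changed = False
    for predicate, entity in list(derived):
        conclusion = rules.get(predicate)
        if conclusion and (conclusion, entity) not in derived:
            derived.add((conclusion, entity))
            changed = True

print(derived)  # includes ('can_fly', 'tweety') and ('has_fur', 'rex')
```

Every piece of knowledge here was hand-written by a human; nothing is learned from data, which is precisely what connectionist approaches changed.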

Read Full Post »

The successes that deep learning systems have achieved in the last decade across all kinds of domains are unquestionable. Self-driving cars, skin cancer diagnostics, movie and song recommendations, language translation, automatic video surveillance, and digital assistants are just a few examples of the ongoing revolution that already affects, or will soon disrupt, our everyday life.
But all that glitters is not gold…

Read the full post on the Amethix Technologies blog

Read Full Post »

In this episode I speak about how important reproducible machine learning pipelines are.
When you are collaborating with diverse teams, several tasks will be distributed among different individuals. Everyone will have good reasons to change parts of your pipeline, leading to confusion and to a number of variants that quickly explodes.
In all those cases, tracking data and code is extremely helpful to build models that are reproducible anytime, anywhere.
Listen to the podcast and learn how.
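As a minimal sketch of what "tracking data and code" can mean in practice (the file name and hyperparameters below are hypothetical placeholders; tools like DVC or MLflow do this far more thoroughly), one can pin the content hash of the dataset, the exact git commit, and the run parameters in a manifest:

```python
import hashlib
import json
import subprocess
from pathlib import Path

def file_hash(path):
    # Content hash of a dataset file: same hash => same data.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def current_commit():
    # Pin the exact code version used for this run.
    return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

def write_manifest(data_path, params, out="run_manifest.json"):
    # Everything needed to reproduce the run: data, code version, parameters.
    manifest = {
        "data_sha256": file_hash(data_path),
        "git_commit": current_commit(),
        "params": params,
    }
    Path(out).write_text(json.dumps(manifest, indent=2))

# Hypothetical usage: "train.csv" and these hyperparameters are placeholders.
write_manifest("train.csv", {"seed": 42, "learning_rate": 1e-3})
```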


Read Full Post »

Have you ever wanted to get an estimate of the uncertainty of your neural network? Bayesian modelling clearly provides a solid framework to estimate uncertainty by design. However, there are many realistic cases in which Bayesian sampling is not really an option, and ensemble models can play a role.

In this episode I describe a simple yet effective way to estimate uncertainty, without changing your neural network’s architecture or your machine learning pipeline at all.
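As a rough illustration of the idea (the post linked below has the actual mathematical background and code), here is one common flavor of ensemble uncertainty: train several copies of the same model on bootstrap resamples and read uncertainty off the spread of their predictions. The tree regressor is just a stand-in for whatever model your pipeline already trains.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bootstrap_ensemble(X, y, k=10, seed=0):
    # Train k copies of the same model on bootstrap resamples of the data.
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(k):
        idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return models

def predict_with_uncertainty(models, X):
    preds = np.stack([m.predict(X) for m in models])  # shape (k, n_samples)
    return preds.mean(axis=0), preds.std(axis=0)      # estimate and spread

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

models = bootstrap_ensemble(X, y)
mean, std = predict_with_uncertainty(models, np.array([[0.0], [10.0]]))
print(mean, std)  # the spread tends to grow far from the training data
```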

The post with mathematical background and sample source code is published here.

Read Full Post »

The success of a machine learning model depends on several factors and events. True generalization to data that the model has never seen before is more a chimera than a reality. But under specific conditions a well-trained machine learning model can generalize well and achieve testing accuracy similar to the one obtained during training.

In this episode I explain when and why machine learning models fail from training to testing datasets.
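A tiny synthetic example (not from the episode) makes the gap visible: an unconstrained decision tree memorizes label noise, so training accuracy looks perfect while testing accuracy reveals how well the model actually generalizes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
# Labels depend on one feature plus noise, so perfect accuracy is impossible.
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

print("train accuracy:", model.score(X_tr, y_tr))  # ~1.0: pure memorization
print("test accuracy:", model.score(X_te, y_te))   # noticeably lower
```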

Read Full Post »

It's always good to put all the findings in AI into perspective, in order to clear up some of the most common misunderstandings and overblown promises.
In this episode I make a list of some of the most misleading statements about what artificial intelligence can achieve in the near future.

Read Full Post »

In this episode - which I advise you to consume at night, in a quiet place - I speak about private machine learning and blockchain, while I sip a cup of coffee in my home office.
There are several reasons why I believe we should start thinking about private machine learning...
It doesn’t really matter which approach becomes successful and gets adopted, as long as it makes private machine learning possible. If people own their data, they should also own the by-products of such data.

Decentralized machine learning makes this scenario possible.
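One concrete flavor of this (a minimal sketch of federated averaging, not the specific protocol discussed in the episode): each data owner trains locally on its private shard, and only model weights, never raw data, are shared and averaged.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # Local gradient-descent steps for linear regression on the owner's data.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    # Each client trains on its private shard; the server only averages weights.
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):  # three data owners, each with a private shard
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = federated_average(w, clients)
print(w)  # approaches true_w without any raw data leaving its owner
```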

Read Full Post »

Today I am having a conversation with Filip Piękniewski, a researcher working on computer vision and AI at Koh Young Research America.
His adventure with AI started in the 90s, and since then a long list of experiences at the intersection of computer science and physics has led him to the conclusion that deep learning might not be sufficient nor appropriate to solve the problem of intelligence, specifically artificial intelligence.
I read some of his publications and got familiar with some of his ideas. Honestly, I have been attracted by the fact that Filip does not buy the hype around AI and deep learning in particular.
He doesn’t seem to share the vision of folks like Elon Musk, who claimed that we are going to see exponential improvements in self-driving cars, among other things (he actually said that before a Tesla drove over a pedestrian).

Read Full Post »

In this episode I continue the conversation from the previous one, about failing machine learning models.

When data scientists have access to the distributions of the training and testing datasets, it becomes relatively easy to assess whether a model will perform equally well on both. But what happens with private datasets, where no access to the data can be granted?
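When access is granted, a simple per-feature two-sample test (here a Kolmogorov-Smirnov test; just one of several reasonable choices) can flag the features whose distribution shifted between training and testing:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(X_train, X_test, alpha=0.05):
    # Flag features whose train and test distributions differ significantly.
    shifted = []
    for j in range(X_train.shape[1]):
        _, p_value = ks_2samp(X_train[:, j], X_test[:, j])
        if p_value < alpha:
            shifted.append(j)
    return shifted

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(500, 3))
X_test = rng.normal(0, 1, size=(500, 3))
X_test[:, 2] += 1.0  # inject a shift in the third feature

print(detect_shift(X_train, X_test))  # typically [2]
```

None of this works when the data itself cannot be inspected, which is exactly the private-dataset case above.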

At fitchain we might have an answer to this fundamental problem.


Read Full Post »