November 30, 2020

NeurIPS official meetup 2020

By Paula Martinez

At Marvik, we are proud to announce that we were selected to organize an official NeurIPS 2020 meetup.

The Neural Information Processing Systems annual meeting (aka NeurIPS) is one of the most prestigious machine learning conferences.

The meetup will be hosted virtually during NeurIPS, from December 7th through December 10th. Stay tuned: more details and confirmed speakers coming soon!


NeurIPS Montevideo meetup registration –> Inteligencia Artificial Montevideo meetup

What are NeurIPS meetups? –> NeurIPS blog post

Full NeurIPS schedule –> main conference schedule



Day 1

When? Monday, December 7 at 6:30 pm

Register: Inteligencia Artificial Montevideo meetup – day 1


  • 6:30 pm: Welcome & intro
  • 6:45 pm: How machines learn and how artificial neural networks work by Lesly Zerna
  • 7:30 pm: Machine learning projects lifecycle by Paula Martinez


In the first talk we will review the basic mathematics and programming concepts needed to understand what machine learning and artificial neural networks are, and we will also talk about some applications powered by artificial intelligence.


The second talk will cover different fields of machine learning applications, how to select your first AI projects, and how to build the right team. The machine learning project lifecycle will be covered as well. It is ideal for managers who want to discover how to leverage and apply this technology.


About the speakers



Day 2

When? Thursday, December 10 at 6:30 pm

Register: Inteligencia Artificial Montevideo meetup – day 2


  • 6:30 pm: Welcome & intro
  • 6:45 pm: Edge Computing by Rodrigo Beceiro
  • 7:30 pm: TBD – Embedded systems in deep learning and video analytics by David Cardozo


Edge Computing: This talk covers lessons learned from industry use cases of machine learning at the edge. It reviews the hardware, frameworks and libraries used in real edge computing applications, from lower-end devices such as the Raspberry Pi to the latest NVIDIA Jetson and Google Coral boards. We will talk about the neural network optimizations required for fast inference and common tools for this, such as TensorRT, Docker and AWS SageMaker. Pruning, inference with lower-precision weights, model distillation and the lottery ticket hypothesis will be covered. IoT solutions will also be covered, including AWS Greengrass and Azure IoT Hub in real applications.
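To give a taste of one of these optimizations, here is a minimal sketch of symmetric int8 post-training weight quantization in plain Python. This is illustrative only: the function names are ours, and real deployments would rely on the converters built into TensorRT or TFLite rather than hand-rolled code.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map float weights to ints in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value differs from the original by at most scale / 2,
# while the int8 representation uses 4x less memory than float32.
```

Storing weights as 8-bit integers shrinks the model roughly fourfold and lets edge hardware use faster integer arithmetic, at the cost of a small, bounded rounding error per weight.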


In the second talk we will explore the different options for deploying deep learning models on embedded systems (TFLite, TensorRT), the challenges they pose, and the hardware being created to enable video analytics and processing at the edge.


About the speakers