March 11, 2024

By Ignacio Aristimuño

January 30, 2024

By Joaquin Bengochea

Prototype your ML project without a single line of code with Azure

Introduction Prototyping a Machine Learning solution is crucial: it acts as a cornerstone for effective problem-solving because it lets you quickly test and validate ideas, which is key in a field such as Machine Learning, characterized by complex algorithms and unpredictable data patterns. Through this process, teams can quickly assess the viability … Continued

Teaching Diffusion Models Specific Concepts

1. Introduction 1.1 Motivation Have you ever found yourself tirelessly scouring the internet for that one image that perfectly conveys your creative vision, only to come up short? Perhaps you’re a content creator on the quest for visuals that align seamlessly with your ideas. But hours of web surfing yield little more than frustration. Imagine … Continued

Diffusion models for video generation

Introduction Diffusion models have earned a special place in the AI visual content generation landscape, dethroning GANs and positioning themselves as the go-to approach for creating realistic content. As technologies like LoRAs and Latent Consistency Models arrived, these models became less restrictive in terms of time and computing resources, and new possibilities and … Continued

Adapting Agile Methodologies in Machine Learning: A good match or a fine-tune operation?

Machine learning and data projects, in contrast with traditional software engineering projects, are governed by a resource characterized by unpredictable patterns, distributions, and biases – namely, the data itself. For a successful implementation of projects that give value through inference over datasets, we often consider: Does the collaboration between Agile methodologies and Machine Learning projects … Continued

Edge computing: deploying AI models into multiple edge devices

Imagine you have to develop a computer vision application that must run in an industrial or agricultural environment with limited connectivity. Think, for example, of an application that detects pests using a camera mounted on agricultural machinery. Now imagine an application that monitors a machine in an industrial plant and needs to raise a real-time … Continued

Finetuning LLMs: Enhancing Product Descriptions Efficiently

Continuing our blog series on Large Language Models (LLMs), today we will talk about a successful finetuning case. While a well-crafted prompt can get the job done in many scenarios, there are times when it might not be enough. Finetuning steps in when we need results of higher quality compared to using prompts … Continued

An Introduction to Diffusion Models and Stable Diffusion

Introduction Imagine a world where creativity transcends the limitations of brushes, clay, and canvas. It was in 2022, at Colorado’s State Fair art competition, that a groundbreaking entry defied the conventional boundaries of artistic creation. Jason M. Allen’s masterpiece, “Théâtre D’opéra Spatial,” won first prize. Not through traditional means, but with … Continued

From zero to NeRF: what to expect data-wise on a NeRF project

In this follow-up to our initial exploration of NeRF (Neural Radiance Fields), we’ll dive deeper into the essential aspects of data preparation and management for utilizing this innovative technology. Additionally, we’ll highlight a selection of practical tools that can aid you in your NeRF journey, enabling you to better understand and apply its capabilities. Recap … Continued

Pandas & Polars: is it time to migrate? definitely maybe 🤔

Where are we? 🗺️ If you are a data scientist, machine learning engineer, or otherwise involved in a data-driven project, it is highly likely that you have used pandas to clean, sanitize, filter, and prepare the data to be used as input for your chosen methods, models, or algorithms. In this early step, which usually … Continued

Deploying Llama2 with NVIDIA Triton Inference Server

Intro NVIDIA Triton Inference Server is an open-source inference serving software that enables model deployment standardization in a fast and scalable manner, on both CPU and GPU. It gives developers the freedom to choose the right framework for their projects without impacting production deployment. It also helps developers deliver high-performance inference across cloud, on-premise, … Continued

