September 30, 2024

By Santiago Aznarez

Teaching Diffusion Models Specific Concepts

Introduction Have you ever found yourself tirelessly scouring the internet for that one image that perfectly conveys your creative vision, only to come up short? Perhaps you’re a content creator on the quest for visuals that align seamlessly with your ideas. But hours of web surfing yield little more than frustration. Imagine … Continued

Diffusion models for video generation

Introduction Diffusion models have earned a special place in the AI visual content generation landscape, dethroning GANs and positioning themselves as the go-to approach for creating realistic content. As technologies like LoRAs and Latent Consistency Models arrived, these models became less demanding in terms of time and computing resources, and new possibilities and … Continued

Adapting Agile Methodologies in Machine Learning: A good match or a fine-tune operation?

Machine learning and data projects, in contrast with traditional software engineering projects, are governed by a resource with unpredictable patterns, distributions, and biases: the data itself. For a successful implementation of projects that deliver value through inference over datasets, we often consider: Does the collaboration between Agile methodologies and Machine Learning projects … Continued

Edge computing: deploying AI models into multiple edge devices

Imagine you have to develop a computer vision application that must run in an industrial or agricultural environment with limited connectivity. Think, for example, of an application that detects pests using a camera mounted on agricultural machinery. Now imagine an application that monitors a machine in an industrial plant and needs to raise a real-time … Continued

Finetuning LLMs: Enhancing Product Descriptions Efficiently

Continuing our blog series on Large Language Models (LLMs), today we will talk about a successful finetuning case. While a well-crafted prompt can get the job done in many scenarios, there are times when it might not be enough. Finetuning steps in when we need results of higher quality compared to using prompts … Continued

An Introduction to Diffusion Models and Stable Diffusion

Introduction Imagine a world where creativity transcends the limitations of brushes, clay, and canvas. In 2022, at Colorado’s State Fair art competition, a groundbreaking entry defied the conventional boundaries of artistic creation: Jason M. Allen’s “Théâtre D’opéra Spatial” took first prize. Not through traditional means, but with … Continued

From zero to NeRF: what to expect data-wise on a NeRF project

In this follow-up to our initial exploration of NeRF (Neural Radiance Fields), we’ll dive deeper into the essential aspects of data preparation and management for utilizing this innovative technology. Additionally, we’ll highlight a selection of practical tools that can aid you in your NeRF journey, enabling you to better understand and apply its capabilities. Recap … Continued

Pandas & Polars: is it time to migrate? definitely maybe 🤔

Where are we? 🗺️ If you are a data scientist, machine learning engineer, or otherwise involved in a data-driven project, it is highly likely that you have used pandas to clean, sanitize, filter, and prepare the data to be used as input for your chosen methods, models, or algorithms. In this early step, which usually … Continued

Deploying Llama2 with NVIDIA Triton Inference Server

Intro NVIDIA Triton Inference Server is open-source inference serving software that standardizes model deployment in a fast and scalable manner, on both CPU and GPU. It gives developers the freedom to choose the right framework for their projects without impacting production deployment. It also helps developers deliver high-performance inference across cloud, on-premise, … Continued
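As a taste of what the full post walks through, here is a minimal sketch of querying a model already served by Triton, using the tritonclient Python package. The model name ("llama2") and tensor names ("text_input", "text_output") are placeholders; they depend on how your model repository is configured.

```python
# Minimal sketch: send a text prompt to a model served by Triton over HTTP.
# Model and tensor names below are assumptions for illustration only.
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server running locally on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single string input tensor of shape [1, 1].
prompt = np.array([["What is Triton Inference Server?"]], dtype=object)
inp = httpclient.InferInput("text_input", [1, 1], "BYTES")
inp.set_data_from_numpy(prompt)

# Run inference and read back the output tensor.
result = client.infer(model_name="llama2", inputs=[inp])
print(result.as_numpy("text_output"))
```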

Enhancing Llama2 Conversations with NeMo Guardrails: A Practical Guide

Intro NeMo Guardrails is an open-source toolkit developed by NVIDIA for seamlessly incorporating customizable guardrails into LLM-based conversational systems. It allows users to control the output of a Large Language Model (LLM), for instance avoiding political topics, following a predefined conversation flow, or validating output content. Inspired by SelfCheckGPT, NeMo heavily relies on the utilization … Continued
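For a flavor of what the guide covers, here is a minimal sketch of wrapping an LLM with NeMo Guardrails. The Colang rail and the model settings below are illustrative placeholders; the post itself builds this out around Llama2.

```python
# Minimal sketch: a tiny guardrail that refuses political questions.
# The Colang flow and the model engine/name are placeholders for illustration.
from nemoguardrails import LLMRails, RailsConfig

colang = """
define user ask politics
  "Which party should I vote for?"

define bot refuse politics
  "Sorry, I can't discuss political topics."

define flow politics
  user ask politics
  bot refuse politics
"""

# Points Guardrails at an underlying LLM; swap in whatever backend you serve.
yaml = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Who should I vote for?"}])
print(response["content"])
```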

