September 30, 2024

By Santiago Aznarez

How to Use Sapiens to Improve AI-Generated Human Images

Computers are more powerful than ever, which lets us do things with AI that we couldn’t do in the past. But these new AI models need a lot of data to learn. This appetite for data has been successfully addressed in natural language processing (NLP) by self-supervised pretraining. The solutions are easy to … Continued

Building a RAG-Based Chatbot with Azure’s Prompt Flow

It’s undeniable that one of the fastest-growing areas of artificial intelligence in recent years is Generative AI, particularly in natural language processing. The number of systems and processes that can benefit from the use of Large Language Models (LLMs) is enormous, ranging from applications like chatbots and content generation to summarization tools, … Continued
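
As a quick taste of what the post builds, here is a minimal, framework-agnostic sketch of the retrieve-then-generate pattern behind a RAG chatbot. The embed function below is a toy placeholder rather than a real embedding model, and nothing here uses the actual Azure Prompt Flow API; the post itself covers the Azure-specific wiring.

```python
# Minimal retrieve-then-generate sketch (framework-agnostic illustration).
# embed() is a hypothetical stand-in for a real embedding model; in the post,
# embedding and generation are handled by Azure-hosted models orchestrated
# with Prompt Flow.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding: normalized character-frequency vector (placeholder only).
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Ground the LLM's answer in the retrieved context.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Prompt Flow lets you chain LLM calls, Python tools, and prompts.",
    "RAG retrieves relevant documents and feeds them to the model as context.",
    "Azure AI Search can serve as the vector index behind a RAG chatbot.",
]
print(build_prompt("What does RAG do?", retrieve("What does RAG do?", docs)))
```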

Mastering Automatic License Plate Recognition in Wild Environments

Automatic License Plate Recognition (ALPR) refers to the task of accurately extracting license plate information from a variety of visual sources, which can range from high-resolution still images to real-time video streams from surveillance cameras. Applications of ALPR are broad and impactful. ALPR systems are used for identifying stolen vehicles, tracking suspects, and … Continued

Exploring Oracle AI Vector Search: Beyond Vector Databases

This blog post explores Oracle AI Vector Search, a new feature that introduces vector capabilities to the Oracle Database. Vector embeddings are a powerful tool for tasks like semantic search and Retrieval-Augmented Generation (RAG). We’ll delve into the creation, storage, and search functionalities offered by Oracle AI Vector Search, providing a practical guide for developers … Continued
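
As a rough sketch of what those functionalities look like in practice, the snippet below creates a table with a VECTOR column, inserts an embedding, and runs a similarity query. It assumes Oracle Database 23ai and the python-oracledb driver; the connection details, table name, and embedding values are placeholders, and the post walks through the real workflow in detail.

```python
# Rough sketch: storing and querying embeddings with Oracle AI Vector Search.
# Assumes Oracle Database 23ai and the python-oracledb driver; connection
# details, table name, and embedding values below are placeholders.
import oracledb

conn = oracledb.connect(user="demo", password="demo", dsn="localhost/FREEPDB1")
cur = conn.cursor()

# A VECTOR column stores the embedding alongside the original text.
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id        NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        content   VARCHAR2(4000),
        embedding VECTOR
    )
""")

# TO_VECTOR converts a textual representation into a vector value.
cur.execute(
    "INSERT INTO docs (content, embedding) VALUES (:c, TO_VECTOR(:e))",
    c="Oracle AI Vector Search adds vector types to the database",
    e="[0.12, 0.80, 0.33]",
)
conn.commit()

# Similarity search: order by cosine distance to a query embedding.
cur.execute(
    """
    SELECT content
    FROM docs
    ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:q), COSINE)
    FETCH FIRST 3 ROWS ONLY
    """,
    q="[0.10, 0.75, 0.30]",
)
print(cur.fetchall())
```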

Exploring RetNet: The Evolution of Transformers

Since 2017, transformers have demonstrated their superiority in performance and computational efficiency, surpassing recurrent neural networks (RNNs). This superiority is largely attributed to the attention mechanism introduced in the paper ‘Attention Is All You Need’ and to transformers’ ability to parallelize training, a feat traditional RNNs struggled with. However, transformers come with a challenge: the memory and inference costs associated with … Continued
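
For readers who want the gist before clicking through, here is a compact restatement, following the notation of the RetNet paper, of why the retention mechanism changes inference cost: it admits both a parallel form (used for training, quadratic in sequence length like attention) and a recurrent form (used for inference, with a fixed-size state updated per token).

```latex
% Parallel (training) form, analogous to attention, O(n^2) in sequence length n:
\[
\mathrm{Retention}(X) = \bigl(QK^{\top} \odot D\bigr)V,
\qquad
D_{ij} =
\begin{cases}
\gamma^{\,i-j}, & i \ge j\\
0, & i < j
\end{cases}
\]
% Recurrent (inference) form: a fixed-size state S_n is updated per token, so
% generating each new token costs O(1) instead of attending over a growing cache:
\[
S_n = \gamma\, S_{n-1} + K_n^{\top} V_n,
\qquad
o_n = Q_n S_n
\]
```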

Difference Between Gemma and Gemini

Google, particularly its DeepMind team, has launched a set of lightweight models called Gemma, built from the same research and technology used to create Gemini. Gemma is available in two sizes, 2B and 7B. It comes with an outstanding Responsible Generative AI Toolkit, supervised fine-tuning (SFT) support for several frameworks, and ready-to-use libraries and Colab notebooks. Pre-trained models or customized … Continued
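
As an illustration of how lightweight the entry point is, the snippet below loads the 2B Gemma checkpoint with the Hugging Face transformers library and generates a short completion. It assumes transformers and torch are installed and that access to the gated google/gemma-2b checkpoint has been granted on the Hugging Face Hub; it is a minimal sketch, not part of the post itself.

```python
# Minimal sketch: loading the 2B Gemma checkpoint with Hugging Face transformers.
# Assumes transformers and torch are installed and the gated google/gemma-2b
# checkpoint is accessible (license accepted on the Hugging Face Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Explain the difference between Gemma and Gemini.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```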

Model Merging: Combining Different Fine-Tuned LLMs

Model composition is a well-known problem in the Machine Learning community. Its aim is to extend the capabilities of a model without forgetting what it already knows. Let’s consider a situation in which we have a model that performs well on a certain task (e.g., Text-to-SQL) but was fine-tuned for that … Continued
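
To make the idea concrete, here is a minimal sketch of one of the simplest merging strategies: linear interpolation of the weights of two fine-tuned checkpoints that share the same architecture. The checkpoint names are hypothetical, and the post goes well beyond this naive averaging.

```python
# Minimal sketch of linear weight interpolation between two fine-tuned
# checkpoints of the same base architecture. Illustration only; the post
# discusses merging strategies in more depth.
import torch

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    # alpha = 1.0 keeps model A entirely, alpha = 0.0 keeps model B entirely.
    merged = {}
    for name, tensor_a in state_a.items():
        merged[name] = alpha * tensor_a + (1.0 - alpha) * state_b[name]
    return merged

# Usage with two hypothetical fine-tuned checkpoints of the same base model:
# sd_sql  = torch.load("text2sql_model.pt")
# sd_chat = torch.load("chat_model.pt")
# model.load_state_dict(linear_merge(sd_sql, sd_chat, alpha=0.5))
```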

AI in Space: Combining Machine Learning with Satellite Imagery for Boosting Agriculture

In the area of agricultural research, utilizing the power of satellite imagery has become a game-changer. This technology provides valuable information that enables farmers, researchers, and industry professionals to make informed decisions, optimize resources, and maximize crop yields. However, working with satellite data has its own set of challenges and intricacies that we must … Continued

Prototype Your ML Project Without a Single Line of Code with Azure

Prototyping a Machine Learning solution is crucial: it acts as a cornerstone for effective problem-solving because it lets you quickly test and validate ideas, which is key in a field characterized by complex algorithms and unpredictable data patterns. Through this process, teams can quickly assess the viability … Continued

