MedGemma: Our most capable open models for health AI development
Generative AI
Key advancements include in-context learning, which enables coherent text generation from prompts, and reinforcement learning from human feedback (RLHF), which fine-tunes models based on human responses. Techniques like prompt engineering have also enhanced LLM performance in tasks such as question answering and conversational interactions, marking a significant leap in natural language processing. Pre-trained language models…
The AI revolution is racing beyond chatbots to autonomous agents that act, decide, and interface with internal systems. Unlike traditional software, AI agents can be manipulated through language, making them vulnerable to attacks like prompt injection and they also introduce new security risks like excessive agency. Join us for an exclusive deep dive with Sourabh…
At the Generative AI Summit Silicon Valley 2025, Vishal Sarin, Founder, President & CEO of Sagence AI, sat down with Tim Mitchell, Business Line Lead, Technology at the AI Accelerator Institute, to explore one of the most urgent challenges in generative AI: its staggering power demands. In this interview, Vishal shares insights from his talk…
Catch up on every session from the AIAI Silicon Valley with sessions across 3 co-located summit featuring the likes of Anthropic, Open AI, Meta and many more.
The increasing role of foundational models in boosting AI agents has fueled the growth by simplifying multi-step tasks beyond traditional AI’s capabilities. Foundational models, such as large language models (LLMs), provide AI agents with advanced reasoning, planning, and language understanding capabilities. This enables agents to autonomously break down, interpret, and execute complex tasks that previously…
However, generative AI models, despite their transformative potential, entail serious privacy and security risks due to the vast amounts of data involved and the opacity of their development. Moreover, there is widespread concern about models hallucinating—inventing false or misleading information when faced with insufficient data. These roadblocks are preventing the smooth implementation of generative AI…
However, despite their impressive human-like intelligence, they are far from infallible, often producing incorrect, misleading, or even harmful outputs. This necessitates human oversight to ensure their safety and reliability. This article explores the role of data labeling for LLMs and how it bridges the gap between the potential of Gen AI models and their reliability…