IBM AI Related News

  • How to begin using IBM Granite 3.1 workshop
    by IBM TechXchange Community on 2025-04-03 at 02:21

    This session will guide you through the basics of setting up an environment to begin using IBM Granite for your everyday use cases. We will walk through how to connect to Replicate, connect to a hosted model, and run the 8B-parameter Granite 3.1 model for free in your notebook. We will finish by trying out Granite in a few simple use cases. In this webinar, you will:

      • Learn to connect your notebook to a hosted IBM Granite model
      • Learn the basics of leveraging open source LLMs
      • Set up an environment to begin creating and testing your own IBM Granite use cases for free
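The "connect to a hosted model" step can be sketched as below. This is a minimal illustration, not the workshop's notebook: the `replicate` client usage is standard, but the exact Granite model slug on Replicate is an assumption, and running it requires a `REPLICATE_API_TOKEN` in your environment.

```python
# pip install replicate
# Assumes REPLICATE_API_TOKEN is set; the model slug below is an assumption,
# check replicate.com for the exact name of the hosted Granite 3.1 model.

def build_granite_input(prompt: str, max_tokens: int = 512) -> dict:
    """Build the input payload for a hosted Granite model on Replicate."""
    return {"prompt": prompt, "max_tokens": max_tokens}

def run_granite(prompt: str) -> str:
    import replicate  # imported here so the sketch runs without the package
    output = replicate.run(
        "ibm-granite/granite-3.1-8b-instruct",  # hypothetical slug
        input=build_granite_input(prompt),
    )
    return "".join(output)  # Replicate streams the text back in chunks

if __name__ == "__main__":
    print(build_granite_input("Summarize RAG in one sentence."))
```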

  • Build a local AI co-pilot using IBM Granite, Ollama, and Continue
    by IBM TechXchange Community on 2025-04-03 at 02:21

    Discover how to adopt AI co-pilot tools in an enterprise setting with open source software and IBM Granite. Gabe Goodhart is a software architect and developer with a background in artificial intelligence, cloud software architecture, and open source. He has spent the last ten years at IBM driving the strategic shift toward a unified AI platform across business units and brands. He loves abstract interfaces, good logging frameworks, and one-off bash scripts that save time. In this webinar, he will cover:

      • Setting up a free co-pilot tool in Visual Studio Code with business-friendly licensing
      • Fully local setups, with no external services used for business-sensitive data
      • Enabling local LLMs with Ollama for additional use cases
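The "fully local" part of this setup works because Ollama serves models over a local HTTP API that tools like Continue talk to. A minimal sketch of querying it directly, assuming Ollama is running on its default port and a Granite model has been pulled (the model tag is an assumption; check `ollama list` for yours):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "granite3.1-dense:8b"  # assumed tag; run `ollama pull <tag>` first

def build_request(prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send the prompt to the local Ollama server; no data leaves the machine."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(build_request("Explain this function."))
```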

  • Build an agentic RAG (Retrieval Augmented Generation) system with Granite 3.1 LLM on a laptop
    by IBM TechXchange Community on 2025-04-03 at 02:21

    This tutorial will show you an implementation of agentic Retrieval-Augmented Generation (RAG) that you can start using today. It can run multi-step workflows, such as combining document search and web search, to perform complex tasks like business research, feature comparison, project-based news retrieval, personal knowledge management, and more. It is based on the following tutorial: https://ibm.co/3YcJCA0. Attendees will walk away with a better understanding of AI agents and some ideas on how best to use them with smaller, lightweight models like IBM Granite.
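The "agentic" part of agentic RAG is the routing step: the system decides which tool (document search, web search) to call before retrieval, rather than always hitting one index. A toy sketch of that loop, with stand-in tools and a keyword router where a real agent would have the LLM itself emit the tool choice:

```python
# Toy agentic-RAG routing loop; tool bodies are stand-ins, not the IBM tutorial's.

def search_documents(query: str) -> str:
    """Stand-in for a local document/vector-store search."""
    return f"[doc results for: {query}]"

def search_web(query: str) -> str:
    """Stand-in for a web search tool."""
    return f"[web results for: {query}]"

def route(query: str) -> str:
    """Pick a tool; a real agent would let the LLM decide."""
    if any(w in query.lower() for w in ("latest", "news", "today")):
        return "web"
    return "docs"

def answer(query: str) -> str:
    tool = route(query)
    context = search_web(query) if tool == "web" else search_documents(query)
    # In the full system, context + query would be sent to Granite 3.1 here.
    return f"({tool}) {context}"
```

The same pattern extends to multi-step workflows: the router runs again on each intermediate result, chaining document and web search before the final generation.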

  • Modernizing Apps: 3 Steps with Automation & Generative AI
    by IBM Technology on 2025-04-02 at 11:15

    Ready to become a certified watsonx Mainframe Modernization Architect? Register now and use code IBMTechYT20 for 20% off of your exam → https://ibm.biz/BdndCA

    Learn more about Application Modernization here → https://ibm.biz/BdndCu

    Technical debt and cloud migration complexities slowing you down? 🚀 Isabella Rocha shares how automation and generative AI tools can simplify app modernization in 3 steps: understand dependencies, plan with precision, and execute efficiently. 💡 Adopt Kubernetes with ease and reduce risks using scalable cloud platforms. ☁️

    AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM → https://ibm.biz/BdndCL

    #cloudmigration #generativeai #automation

  • New prompting techniques tackle model bloat – IBM
    on 2025-04-02 at 01:11

    As reasoning models like OpenAI’s o1, DeepSeek-R1 and Google’s Gemini 2.5 compete to top AI intelligence benchmarks, enterprises looking to integrate AI are becoming increasingly wary of something called “model bloat”—the phenomenon whereby models become unnecessarily large or complex, pushing up computational costs and model training time and decreasing the speed at which they can provide the responses enterprises need.

    OpenAI’s o1 and DeepSeek-R1 use chain of thought (CoT) reasoning to break complex problems into steps, achieving unprecedented performance and greater accuracy than prior models. But CoT also demands substantial computational resources during inference, leading to lengthy outputs and higher latency, says Volkmar Uhlig, a VP and AI Infrastructure Portfolio Lead at IBM, in an interview with IBM Think. Enter a new class of prompting techniques, described in several new papers and ranging from atom of thought (AoT) to chain of draft (CoD), which seek to increase the efficiency and accuracy of CoT by helping models solve problems more quickly, thereby cutting down on costs and latency.

    AI scientist and startup founder Lance Elliott sees the new offshoots of chain of thought as variations in a prompt engineer’s toolkit. “Your typical home handiwork toolkit might have a regular hammer—that would be CoT,” he tells IBM Think. “AoT would be akin to using a specialized hammer used for situations involving cutting and adjusting drywall. You could use a regular hammer for drywall work, but it would be advisable to use a drywall hammer if you had one and knew how to use it properly.”

    Vyoma Gajjar, an AI Technical Solution Architect at IBM, sees potential in these new CoT cousins, especially for enterprises “looking for more cost-efficient ways to prompt small models to get accurate answers for their specific use cases,” she says.
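The difference between these prompting styles is visible at the prompt-template level: chain of thought asks for full step-by-step reasoning, while chain of draft caps each step to a few words to cut output tokens, latency, and cost. The exact wording below is illustrative only; the papers the article alludes to each use their own phrasing.

```python
# Illustrative CoT vs. chain-of-draft prompt templates; wording is an assumption.

def cot_prompt(question: str) -> str:
    """Standard chain of thought: request full step-by-step reasoning."""
    return f"{question}\nLet's think step by step."

def cod_prompt(question: str, max_words: int = 5) -> str:
    """Chain of draft: same stepwise structure, but each step is a terse
    draft, so the model emits far fewer output tokens per problem."""
    return (
        f"{question}\nThink step by step, but keep each step to at most "
        f"{max_words} words, then give the final answer."
    )
```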
