AI in Production – How to Reduce Downtime from 90 Minutes to Just a Few? What This Podcast Episode Is About

26.02.2026 | News

In the latest episode of the “AI in Production” podcast, we discuss one of the most practical applications of artificial intelligence in industry — the use of a Large Language Model (LLM) in maintenance operations.

In this article, we highlight the key takeaways, but in the full episode you’ll learn:

  • what the data management process looked like before the implementation,
  • how security and confidentiality requirements were addressed,
  • how the solution was integrated with the client’s existing infrastructure,
  • how employees reacted to the new system,
  • and where this solution is headed next (including AR integration and real-time language translation).

What Is the “AI in Production” Podcast About?

The “AI in Production” podcast is a series of conversations about real-world artificial intelligence implementations in industrial environments. We don’t focus on theory or buzzwords — instead, we explore:

  • the specific challenges faced by manufacturing companies,
  • real-life AI implementations,
  • organizational transformation hurdles,
  • measurable business outcomes,
  • and what truly works… and what doesn’t.

We speak with experts who lead AI projects in factories — from initial concept and IT infrastructure integration to tangible operational results.

This particular episode is especially compelling because it addresses one of the most expensive challenges in any manufacturing facility: machine downtime.

What Is This Episode About?

In this episode, we discuss the implementation of a system based on a Large Language Model (LLM) within the maintenance department of a pharmaceutical company. Our guest is Łukasz Borzęcki, CEO/CTO at VM.PL.

Łukasz walks us through a range of challenges the pharmaceutical company was facing, including:

  • frequent machine failures,
  • downtime lasting over 1.5 hours,
  • fragmented documentation (PDF files, manuals, ticketing systems),
  • and knowledge distributed across experienced employees rather than centralized systems.

An additional complication was the nature of the pharmaceutical production environment — clean rooms, strict safety procedures, and the physical distance between documentation storage and the production line.

What Actually Changed in Maintenance Operations?

The key factor wasn’t the LLM technology itself. The key was how it was implemented and applied.

The team designed the solution using a RAG (Retrieval-Augmented Generation) architecture, combined with a vector knowledge database and a language model operating within the client’s closed infrastructure.

The system is grounded exclusively in the company’s internal data:

  • machine manuals,
  • failure history,
  • RCA (Root Cause Analysis) meeting records,
  • the ticketing system,
  • health and safety procedures.

The model does not use the internet and does not generate responses beyond the scope of company data. It answers strictly based on organizational knowledge and provides references to the sources it uses.

In practice, this means that a production line operator can enter an error code or describe a symptom, and the system will:

  • analyze similar historical cases,
  • search through documentation,
  • provide a step-by-step solution,
  • indicate the source on which the response is based.
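The retrieval-then-answer flow described above can be sketched in a much simplified form. This is an illustrative example only: the document names, the toy keyword-overlap “embedding,” and the function names are assumptions for the sketch, not the actual implementation discussed in the episode, which would use a real embedding model and vector database.

```python
from collections import Counter

# Toy in-memory "vector store": each entry is (source, text).
# In a real RAG system these would be embedded chunks of machine
# manuals, failure history, RCA records, and ticket-system entries.
KNOWLEDGE_BASE = [
    ("manual_line3.pdf", "error E42 conveyor belt sensor misaligned realign sensor bracket"),
    ("ticket_1187", "error E42 resolved by cleaning optical sensor and realigning bracket"),
    ("rca_2023_04", "repeated E42 failures caused by worn bracket recommend replacement"),
]

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b) or 1.0)

def retrieve(query, k=2):
    """Return the k knowledge-base entries most similar to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda e: cosine(q, embed(e[1])), reverse=True)
    return ranked[:k]

def answer(query):
    """Assemble grounded context with source references.
    A real system would pass this context to the LLM with an
    instruction to answer only from it and to cite the sources."""
    hits = retrieve(query)
    return "\n".join(f"[{src}] {text}" for src, text in hits)

print(answer("operator reports error E42 on the conveyor"))
```

The key design point mirrored here is that the answer is assembled only from retrieved internal documents, and each fragment carries its source reference, which is what lets the operator verify where a recommendation comes from.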

As a result, downtime was reduced from over 90 minutes to just several minutes.

The Biggest Challenge? People, Not Technology

An important theme that emerged in the conversation was employees’ concerns about the new system.

The natural questions were:

  1. Will AI replace me?
  2. Will my role become unnecessary?

Łukasz addressed these concerns directly. He presented a practical perspective showing that introducing AI into the organization:

  • enables operators to resolve simpler issues independently,
  • gives technicians more time to focus on preventive activities,
  • shifts the organization from a reactive approach to a predictive maintenance model.

Interestingly, the System Also Helps Plan for the Future

The LLM doesn’t just provide guidance on how to fix a failure.

Based on historical data, it can also identify:

  • which components fail most frequently,
  • whether inventory levels should be increased,
  • which supplier’s parts are less failure-prone,
  • where to focus improvement efforts in the upcoming quarter.
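At its core, this kind of insight comes from aggregating the failure history. A minimal sketch, using hypothetical record fields (`component`, `supplier`, `downtime_min`) and made-up data that do not come from the episode:

```python
from collections import Counter

# Hypothetical failure-history records; the schema and values are
# illustrative, not the client's actual data.
failures = [
    {"component": "conveyor sensor", "supplier": "A", "downtime_min": 95},
    {"component": "conveyor sensor", "supplier": "A", "downtime_min": 110},
    {"component": "seal pump",       "supplier": "B", "downtime_min": 40},
    {"component": "conveyor sensor", "supplier": "B", "downtime_min": 30},
]

# Which components fail most frequently?
by_component = Counter(f["component"] for f in failures)

# Which supplier's parts are less failure-prone?
by_supplier = Counter(f["supplier"] for f in failures)

print(by_component.most_common(1))
print(dict(by_supplier))
```

The LLM layer adds value on top of such aggregates by answering these questions in natural language and pointing back to the underlying records.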

This is no longer just an operational support tool — it becomes part of strategic production management.

Why Is This Episode Worth Listening To?

Because it’s a conversation about real life in manufacturing.

In this episode, we walk through what an AI implementation in a production company looks like — step by step. No shortcuts. No marketing buzzwords. Instead, we focus on real examples from the maintenance department.

Łukasz Borzęcki shares hands-on experience gained in a live production environment. He explains:

  • how to convince a team to adopt a new solution,
  • how to approach data security,
  • how to integrate the system with existing infrastructure,
  • and what it takes to ensure the project doesn’t stop at the “proof of concept” stage.

This episode is for professionals responsible for production, maintenance, or technology development in manufacturing environments, and for anyone looking to reduce machine downtime and understand how LLMs can be applied in industry in practice.

If you’re interested in how AI in manufacturing can genuinely support people — rather than replace them — this conversation will provide clear, practical answers.



Wiktoria Łabaza, Junior Content Writer

I create content about artificial intelligence that highlights its practical use in VM.PL technology projects. On the blog, I share knowledge about AI-driven solutions and their implementation across various industries.