Designing for AI: Lessons from a bag of chocolate candy

Smart Enterprise, June 17, 2025

How a playful classification system exposed deep truths about enterprise AI UX

What do colorful chocolate candy and enterprise software have in common? Quite a bit, especially when artificial intelligence enters the picture.

At first glance, building a system to classify the color of chocolate buttons might sound like a playful experiment. But it quickly turned into a real-world simulation of what happens when intelligent systems interact with human expectations, and the surprising design challenges that emerge.

This article explores seven critical differences in designing AI vs. non-AI systems, based on hands-on experience, and links them to challenges faced by enterprise teams working with SAP, BTP, and intelligent automation today.

1. Uncertainty is a feature, not a flaw

Traditional software gives you predictable results. It either works or it doesn’t. AI, by contrast, works in probabilities. Our chocolate analyzer might say it’s 87% confident that the candy is green. That uncertainty, however small, affects how users experience and trust the system.

In enterprise systems, especially with AI-driven SAP BTP extensions, uncertainty is even more critical. An AI model suggesting optimizations or flagging anomalies must convey how confident it is and give users fallback options when confidence is low.
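
As a minimal sketch of what this can look like in the interface, the snippet below routes low-confidence predictions to a confirmation path instead of presenting them as facts. The threshold, labels, and function names are illustrative assumptions, not the actual classifier's code.

    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tune per use case

    @dataclass
    class Prediction:
        label: str          # e.g. "green"
        confidence: float   # model probability for that label, 0.0 to 1.0

    def present(prediction: Prediction) -> str:
        """Decide how the UI communicates a prediction to the user."""
        pct = round(prediction.confidence * 100)
        if prediction.confidence >= CONFIDENCE_THRESHOLD:
            return f"Classified as {prediction.label} ({pct}% confident)."
        # Low confidence: surface the uncertainty and offer a fallback action.
        return (f"Possibly {prediction.label} ({pct}% confident) - "
                "please confirm or correct the color.")

    print(present(Prediction("green", 0.87)))  # confident path
    print(present(Prediction("brown", 0.52)))  # fallback path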

2. Feedback isn’t just feedback – it’s data!

In a non-AI system, user feedback improves usability. But in an AI system, it improves performance.

In our chocolate classifier, when users correct misclassifications, those corrections should be used to retrain the model. This feedback loop is critical for continuous learning.
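
A hedged sketch of that loop, assuming corrections are simply appended to a file that feeds the next retraining run; the file name and fields are hypothetical, not the project's actual pipeline.

    import csv
    import datetime
    from pathlib import Path

    FEEDBACK_LOG = Path("corrections.csv")  # assumed store for retraining data

    def record_correction(image_path: str, predicted: str, corrected: str) -> None:
        """Append a user correction so the next retraining run can learn from it."""
        is_new = not FEEDBACK_LOG.exists()
        with FEEDBACK_LOG.open("a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(["timestamp", "image", "predicted", "corrected"])
            writer.writerow([datetime.datetime.now().isoformat(),
                             image_path, predicted, corrected])

    # Example: the model said "red", the user corrected it to "brown".
    record_correction("samples/candy_0042.png", "red", "brown")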

Enterprise software must do the same. Whether it’s predictive maintenance or smart invoicing, the system should learn from user input and the UX should encourage that.

3. Errors are part of the design

Mistakes in traditional systems are bugs to fix. In AI systems, they’re expected, especially on edge cases.

AI systems are probabilistic and data-driven. You can’t always pinpoint exactly why a misclassification occurred (the “black box effect”). That requires a new approach to error handling in the interface: trust-building, transparency, and override mechanisms.
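
One way to express the override part, sketched with assumed names: the user’s decision always takes precedence, while the model’s original guess is retained for auditing and future retraining.

    from typing import Optional

    def resolve_label(model_label: str, user_override: Optional[str]) -> dict:
        """The user's choice wins; the model's guess is kept for audit and retraining."""
        return {
            "final_label": user_override or model_label,
            "model_label": model_label,
            "overridden": user_override is not None,
        }

    print(resolve_label("red", None))     # no override: the model's label stands
    print(resolve_label("red", "brown"))  # override: the user's decision applies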

4. Designing for fairness is designing for function

Our system tended to misclassify similar-looking shades (e.g., brown vs. red). Why? The training data was imbalanced.

This exposed a key truth: bias can degrade usability, not just fairness. In enterprise AI, this is even more dangerous. Models trained on narrow data may behave inconsistently across regions, roles, or devices.

Pro tip: Design and test across diverse data sets and make bias detection part of the feedback loop.
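
As an illustration of what making bias detection part of the loop can mean at the data level, the sketch below flags color classes that make up too small a share of the training set. The threshold and the toy label counts are assumptions for the example.

    from collections import Counter

    def flag_underrepresented(labels, min_share=0.10):
        """Return class labels whose share of the training data is below min_share."""
        counts = Counter(labels)
        total = sum(counts.values())
        return [label for label, n in counts.items() if n / total < min_share]

    # Assumed toy distribution: brown is badly underrepresented.
    training_labels = ["green"] * 50 + ["red"] * 45 + ["brown"] * 5
    print(flag_underrepresented(training_labels))  # -> ['brown']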

5. Transparency builds trust

One of the most frequent user questions: “Why did the AI say this was red?”

To address this, we added tooltips showing how confident the model was and gave explanations based on color histograms.
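
The exact explanation method isn’t described here, so the following is only a rough sketch of the idea: bin the candy’s pixels into coarse color ranges and report which range dominates, as a human-readable "why".

    from collections import Counter

    def explain_color(pixels):
        """Summarize which coarse color range dominates the candy's pixels."""
        buckets = Counter()
        for r, g, b in pixels:
            if r > g and r > b:
                buckets["reddish"] += 1
            elif g > r and g > b:
                buckets["greenish"] += 1
            else:
                buckets["bluish/other"] += 1
        top, count = buckets.most_common(1)[0]
        share = round(100 * count / len(pixels))
        return f"{share}% of the pixels fall in the {top} range."

    # Assumed pixel data: mostly red candy pixels plus some background.
    print(explain_color([(200, 40, 30)] * 87 + [(70, 80, 90)] * 13))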

In enterprise systems, trust in AI = adoption. Explainability becomes part of the UX. A user who can see why the system made a decision is far more likely to engage with it, even if the answer isn’t perfect.

6. Scalability means designing for change

AI systems can evolve, but only if the UX allows them to.

Our chocolate system scaled easily to new colors, but only because we had a built-in mechanism for collecting and labeling new data. Without it, retraining would have been a nightmare.
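
A minimal sketch of such a mechanism, with assumed names and thresholds: a new color class only joins the next training run once enough labeled samples have been collected for it.

    MIN_SAMPLES_PER_CLASS = 100  # assumed minimum before retraining is worthwhile

    label_store = {"green": 340, "red": 310, "brown": 295}  # labeled samples per class

    def ready_to_retrain(new_color: str, collected: int) -> bool:
        """Register a new class and report whether every class has enough data."""
        label_store[new_color] = collected
        return all(n >= MIN_SAMPLES_PER_CLASS for n in label_store.values())

    print(ready_to_retrain("orange", 40))   # False: keep collecting samples
    print(ready_to_retrain("orange", 120))  # True: schedule the retraining run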

In the SAP BTP landscape, this becomes critical. If you’re building a clean-core extension powered by AI, think about how it will evolve as the business or data changes.

7. Data is the product

Traditional systems run on logic. AI systems run on data. Poor data = poor experience.

In our case, we needed clean, labeled, high-quality images of the chocolate candy under various lighting conditions. Skimping here led to inconsistent results.
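
One way to make that a design-time check rather than an afterthought, sketched with assumed metadata fields: verify that every color label has images under every lighting condition before training.

    LIGHTING_CONDITIONS = {"daylight", "warm_indoor", "cool_led"}  # assumed set

    def missing_coverage(dataset):
        """Return (label, lighting) pairs that have no images at all."""
        seen = {(item["label"], item["lighting"]) for item in dataset}
        labels = {item["label"] for item in dataset}
        return [(label, light)
                for label in sorted(labels)
                for light in sorted(LIGHTING_CONDITIONS)
                if (label, light) not in seen]

    sample = [{"label": "green", "lighting": "daylight"},
              {"label": "green", "lighting": "cool_led"},
              {"label": "red",   "lighting": "daylight"}]
    print(missing_coverage(sample))  # gaps to fill before training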

The same goes for enterprise apps. Whether you’re integrating AI into maintenance workflows or HR analytics, data preparation must be a design priority.

What would have been different without AI?

  • No need for confidence levels = Less transparency: Traditional systems provide fixed outputs, but AI allows users to understand the certainty of each result, building trust and helping users make informed decisions. 
  • Simpler testing = Less realism: Non-AI products test for rule-following; AI products test for nuance and variability. This complexity might be harder, but it ensures the system adapts to real-world messiness. 
  • No continuous learning = No long-term growth: Without AI, products stay static. With AI, user feedback and data fuel continuous improvement, making the product more relevant over time. 
  • Predictable outputs = Limited intelligence: While easier to debug, rule-based systems lack adaptability. AI adds complexity, but with thoughtful design, it unlocks far greater potential.

Why this matters now

As companies migrate to SAP S/4HANA and extend functionality via SAP BTP, they’re embedding AI into business-critical workflows. But most of these systems are still being designed like traditional apps.

Without AI-first design principles, we risk creating tools that confuse, frustrate, or get ignored, no matter how powerful the backend is.

Trifork is helping customers bridge the AI gap by combining SAP-savvy development with Design Thinking, Ride-Alongs, and hands-on feedback loops.

From chocolate to core business: A real use case in design

We recently explored these principles in an enterprise scenario, designing an AI-powered clean-core monitoring tool for SAP.

Much like our chocolate classifier, it surfaced confidence-rated suggestions, and it needed the same things:

  • Clear transparency features
  • Manual override options
  • Feedback loops for ongoing improvement

Curious about the design and the initial results? Let’s talk!

Final reflection: Build-Measure-Learn

In AI-driven products, UX design is an ongoing opportunity to deliver value. As AI models evolve and users grow more familiar with intelligent systems, their expectations change too. Continuously refining the experience isn’t just about improvement; it’s how businesses stay competitive and user-focused, and keep delivering value as the technology and its users grow together.

The Build-Measure-Learn loop not only applies to the product, but to the relationship between humans and intelligent systems.

That’s the mindset shift we need in enterprise UX: moving from static interfaces to a co-evolving experience between user and AI.

The use of AI brought flexibility and scalability to the sample analysis system, allowing it to handle complexity and variation more effectively than a rule-based approach. However, designing for AI also introduced new challenges. Users needed:

  • Transparency around how results were generated
  • Reassurance when things went wrong
  • Confidence in the system’s decisions

These considerations are often less critical in traditional systems, but in AI-powered products, they’re essential to creating a trustworthy, human-centered experience.

Ready to explore AI in your SAP landscape?

We offer:

  • Ride-Alongs to uncover hidden opportunities
  • AI Discovery Workshops to test AI-driven workflows
  • Masterclasses in AI and SAP BTP

Let’s co-design intelligent systems your users will actually trust and adopt.

For further information please contact:

Marina Epitropaki

AI/UX Designer