
UX in the AR/VR Era: Overcoming Challenges to Create Immersive Experiences


Author No. 15 – David

Augmented Reality (AR) and Virtual Reality (VR) technologies today have reached a level where hardware performance alone is no longer enough to guarantee adoption. Mid-sized Swiss companies and large corporations are looking to put the user experience at the heart of their immersive projects.

In data-rich environments, the success of AR/VR initiatives relies on inventing new interaction methods, simplifying navigation, and integrating artificial intelligence to anticipate needs. Involving UX designers from the design phase is the key lever to reduce the learning curve and encourage natural adoption by employees and customers.

Immersive Interactions: Inventing New Modes of Engagement

AR/VR interfaces require natural interactions without resorting to complex controllers. Voice, gesture, and haptic modes must be designed as UX foundations to ensure rapid adoption.

Natural Voice Command

Using voice in AR/VR allows users to trigger actions without interrupting visual immersion. Modern voice recognition systems rely on AI models to understand context and compensate for pronunciation variations.

To deliver a smooth UX, it is essential to create a limited, precise vocabulary tailored to business tasks. The design phase should include recording sessions and user testing with end users to refine commands.
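
The "limited, precise vocabulary" idea can be sketched as a small command registry with light normalization and an explicit fallback when no command matches. The command phrases, action names, and `normalize()` rules below are hypothetical illustrations, not a real product API.

```python
# Sketch of a constrained voice-command vocabulary for an AR maintenance
# headset. Phrases and action names are invented for illustration.

def normalize(utterance: str) -> str:
    """Lowercase and strip filler words so near-matches resolve."""
    fillers = {"please", "the", "a"}
    words = [w for w in utterance.lower().split() if w not in fillers]
    return " ".join(words)

# Limited, precise vocabulary mapped to business actions.
COMMANDS = {
    "next step": "SHOW_NEXT_INSTRUCTION",
    "previous step": "SHOW_PREVIOUS_INSTRUCTION",
    "show diagram": "OPEN_SCHEMATIC",
    "report issue": "OPEN_INCIDENT_FORM",
}

def resolve(utterance: str) -> str:
    # Unknown utterances fall back to an explicit clarification action
    # instead of guessing, which keeps the UX predictable.
    return COMMANDS.get(normalize(utterance), "ASK_TO_REPEAT")

print(resolve("Please show the diagram"))  # OPEN_SCHEMATIC
print(resolve("hum what"))                 # ASK_TO_REPEAT
```

Keeping the vocabulary small is what makes recording sessions and user testing tractable: each phrase can be validated individually with end users.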

For example, an industrial company integrated voice recognition into an AR headset dedicated to maintenance. The interface allowed technicians to consult instructions hands-free, reducing training time by 20% and decreasing execution errors.

Intuitive Gesture Interactions

Gestures are a powerful channel for manipulating virtual objects, but they require rigorous UX design. Movements must feel natural, based on common gestures, and guided by visual or auditory feedback.

Designers use lightweight prototypes to test different types of gestures – swiping, pinching, rotating – and analyze user perception. These trials quickly identify the most intuitive actions.

It is also crucial to provide visual safety zones to avoid uncomfortable movements and ensure ergonomic interactions during prolonged VR sessions.

Haptic Feedback and Sensory Experience

Haptics introduce a tactile dimension that enhances the credibility of virtual objects. Gloves or controllers equipped with vibration motors can simulate resistance or texture.

To design effective feedback, the UX designer maps the key moments where a haptic signal enhances understanding – for example, confirming an action or simulating the feel of a surface within a spatial computing environment.

An internal project by a training organization paired subtle vibrations with the manipulation of virtual tools during VR exercises. User feedback showed improved procedure recall and an increased sense of realism.

Optimizing Data Visualization in AR/VR Environments

The wealth of information available in immersion can become overwhelming without a UX approach focused on hierarchy. Interfaces must structure and guide the gaze in three-dimensional space.

Information Hierarchy

In an immersive environment, the spatial distribution of data must reflect its importance. Critical elements should be placed in the central field of view and enlarged to draw attention.

UX designers set priority levels for each type of information and use variations in color, size, and animation to distinguish statuses and urgencies, much as self-service BI dashboards do.

This approach allows users to instantly spot key indicators without being burdened by unnecessary visual overload.

Contextual Spatial Mapping

Mapping data in 3D space relies on virtual anchors linked to real objects or symbolic zones. These landmarks facilitate navigation and orientation.

By designing modular spaces, the UX designer creates dedicated information zones – for example, one floating panel for alerts and another for tracking data.

Dynamic positioning, adjusted according to the user’s location, prevents overlap and maintains readability at all times.

Modular Interfaces for Dynamic Data

AR/VR environments often embed real-time data streams. Using independent UX modules allows each component to evolve without impacting the overall coherence.

This modular architecture has been proven in prototypes where a weather widget can be quickly swapped for a financial chart without redeploying the entire application.
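
The swap described above boils down to mounting independent widgets behind a shared interface. A minimal sketch, with invented class and method names:

```python
# Modular-UX sketch: independent widgets registered behind a common
# interface, so one can be swapped without touching the rest of the
# scene. All names here are illustrative assumptions.

class Widget:
    def render(self) -> str:
        raise NotImplementedError

class WeatherWidget(Widget):
    def render(self) -> str:
        return "Weather: 14C, cloudy"

class FinanceWidget(Widget):
    def render(self) -> str:
        return "CHF/EUR: 1.06"

class Dashboard:
    def __init__(self) -> None:
        self.slots: dict[str, Widget] = {}

    def mount(self, slot: str, widget: Widget) -> None:
        # Swapping a component is just re-mounting the slot.
        self.slots[slot] = widget

    def render(self) -> list[str]:
        return [w.render() for w in self.slots.values()]

dash = Dashboard()
dash.mount("top-left", WeatherWidget())
print(dash.render())                     # ['Weather: 14C, cloudy']
dash.mount("top-left", FinanceWidget())  # swap without redeploying
print(dash.render())                     # ['CHF/EUR: 1.06']
```

Because each widget only exposes `render()`, new data sources can be added without touching the dashboard's layout logic.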

A financial institution implemented a VR risk simulation platform where each indicator – market data, rates, volume – can be toggled on or off depending on the scenario. This modularity simplified the addition of new data sources and accelerated decision-making.


Security and Trust: Challenges for Immersive Interfaces

Entering sensitive data in AR/VR and protecting privacy are major UX challenges. Users must feel their information is safe to fully engage.

Sensitive Data Entry

Entering text or confidential numbers in an immersive environment requires alternative solutions – virtual keyboards, voice dictation, or gesture recognition with encrypted capture – each designed with GDPR compliance in mind.

Each method undergoes UX testing to evaluate accuracy, speed, and comfort of use. Automatic correction mechanisms or visual masking are integrated to protect confidentiality.

The design also includes status messages to inform the user of the security level of their input and data processing times.

Biometric Authentication

Biometrics (fingerprints, facial recognition, iris) can simplify immersive access while strengthening security. UX designers ensure a smooth process to avoid frustrations from rejections.

Animations and visual cues guide the user during biometric capture, and fallback alternatives are provided to guarantee access in case of failure.

Integrating biometrics requires strict encryption protocols and secure storage, transparent to the user.

Privacy Protection in Immersion

When the user interacts with content derived from personal data, they must be able to control sharing and retention of that information. Contextual consent screens are placed at the appropriate moments.

A VR activity log can be provided to recap actions performed and allow data deletion at any time.
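
Such a log needs only three capabilities: record, recap, and erase on demand. A privacy-by-design sketch (field names and structure are assumptions, not a prescribed schema):

```python
# Per-user VR activity log supporting recap and user-initiated deletion.
from datetime import datetime, timezone

class ActivityLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, user_id: str, action: str) -> None:
        self._entries.append({
            "user": user_id,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def recap(self, user_id: str) -> list[str]:
        """Let the user review what was logged about them."""
        return [e["action"] for e in self._entries if e["user"] == user_id]

    def erase(self, user_id: str) -> int:
        """Delete all of a user's entries; returns how many were removed."""
        before = len(self._entries)
        self._entries = [e for e in self._entries if e["user"] != user_id]
        return before - len(self._entries)

log = ActivityLog()
log.record("u1", "opened training module")
log.record("u1", "completed exercise 3")
print(log.recap("u1"))
print(log.erase("u1"))  # 2
```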

Integrating AI for a Smooth and Anticipative UX

Artificial intelligence offers proactive support in AR/VR: it anticipates needs, adapts the interface, and reduces cognitive load. UX takes on a predictive and personalized dimension.

Contextual Intelligent Assistants

Voice chatbots or virtual avatars can guide the user step by step in an immersive environment. They provide visual and verbal cues adapted to the context and user progress – provided the underlying AI project is managed rigorously.

Predictive Navigation

Machine learning algorithms analyze user journeys to anticipate next steps – automatically displaying work areas or suggesting gesture shortcuts.

By integrating recommendation models, the interface highlights the most relevant tools or information, reducing search time.

A/B tests run in VR make it possible to validate adaptive profiles and to adjust trigger thresholds for each user segment.

Adaptive Interfaces

Based on visual fatigue, expertise level, or session duration, AI dynamically adjusts font size, element brightness, or menu complexity.

This limits cognitive overload and extends user comfort, two key factors for the long-term success of immersive projects.
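
One way to picture such an adaptive rule: scale font size up and brightness down as the session lengthens. The thresholds and scale factors below are invented purely for illustration.

```python
# Hedged sketch of an adaptive-interface rule driven by session length.
# All thresholds are illustrative assumptions, not recommended values.

def adapt_ui(base_font_px: float, session_minutes: float) -> dict:
    if session_minutes < 20:
        scale, brightness = 1.0, 1.0
    elif session_minutes < 45:
        scale, brightness = 1.1, 0.9   # ease early visual fatigue
    else:
        scale, brightness = 1.25, 0.8  # long session: larger text, dimmer UI
    return {"font_px": round(base_font_px * scale, 1),
            "brightness": brightness}

print(adapt_ui(16, 10))  # {'font_px': 16.0, 'brightness': 1.0}
print(adapt_ui(16, 60))  # {'font_px': 20.0, 'brightness': 0.8}
```

In a real system these inputs would come from eye-tracking or usage telemetry rather than a single duration value.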

AR/VR UX: Turning Immersion into a Competitive Advantage

Inventing natural interaction methods, clear data structuring, secure exchanges, and AI integration are the pillars of successful AR/VR UX. Each of these dimensions requires early involvement of specialist designers to create intuitive, modular, and scalable interfaces.

By placing the user at the center, companies in healthcare, industry, education, or architecture can accelerate adoption of these technologies and differentiate themselves in the market. Our Edana experts support you in transforming your immersive ambitions into concrete, reliable, and business-focused projects.

Discuss your challenges with an Edana expert

PUBLISHED BY

David Mendes

David is a Senior UX/UI Designer. He crafts user-centered journeys and interfaces for your business software, SaaS products, mobile applications, websites, and digital ecosystems. Leveraging user research and rapid prototyping expertise, he ensures a cohesive, engaging experience across every touchpoint.


Google Stitch (AI-Driven UI Design): Revolution or Just a Prototype Accelerator? (Full Analysis and Real-World Applications)


Author No. 15 – David

The promise of rapidly turning an idea into an interface prototype grabs the attention of product teams and CIOs, especially when AI is driving it. However, understanding precisely what Google Stitch delivers and what it cannot replace is crucial to avoid disappointments during the design phase. The goal is not to produce a final design but to accelerate the initial scoping steps and test concepts. This article analyzes Google Stitch’s practical workings, illustrates its added value in professional settings, details its limitations, and offers a pragmatic integration approach within a high-quality UX/UI workflow.

Understanding Google Stitch and How It Works

Google Stitch generates mid-fidelity mockups from detailed, structural prompts. It doesn’t produce production-ready designs but serves as an accelerator for the design phase.

Core Principles of Google Stitch

Google Stitch relies on a generative AI engine trained on thousands of UI components. The tool interprets a prompt describing the product context, target user, screen objective, and constraints. It then generates a structured mid-fidelity mockup with generic components and logical hierarchy.

This generation draws on internal libraries of interface elements (buttons, input fields, menus). The AI automatically positions these blocks to quickly deliver a usable foundation. The result is both visual and code-based but may not reflect your brand guidelines or internal patterns.

In practice, the render quality directly depends on the prompt’s precision. The more detailed it is, the more Stitch will provide a relevant structure that respects information hierarchy and expected interactions. It’s a framing tool, not a final design studio.

Practical Workflow: From Prompt to Interface

The entry point is always a structured prompt. It must cover nine elements: product context, target user, screen objective, screen type, hierarchy, components, visual direction, constraints, and expected outcome. This framework guides the AI and maximizes output relevance.
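
The nine-element structure above can be assembled into a single text prompt. The wording below is an example we made up, not an official Stitch template:

```python
# Example of the nine-element prompt structure, assembled as one text
# prompt. All field values are illustrative assumptions.

PROMPT_FIELDS = {
    "product context": "logistics SaaS for Swiss SMEs",
    "target user": "operations manager monitoring deliveries",
    "screen objective": "real-time visualization of delivery status",
    "screen type": "dashboard, landscape, large wall screen",
    "hierarchy": "KPIs on top, timeline center, alerts on the right",
    "components": "status cards, map, timeline, anomaly list",
    "visual direction": "minimalist flat design, high contrast",
    "constraints": "responsive, WCAG AA, sub-second refresh",
    "expected outcome": "mid-fidelity mockup with exportable components",
}

prompt = "\n".join(f"{k.title()}: {v}" for k, v in PROMPT_FIELDS.items())
print(prompt)
```

Keeping the fields in a dictionary also makes iteration cheap: change one value, regenerate, compare.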

Once the prompt is submitted, Stitch returns a mid-fidelity interface as screenshots and exportable assets. Components are coded in HTML and Tailwind or integrated into a raw Figma file. This initial output allows quick visualization of layout and key interactions.

The iteration cycle is immediate: prompt adjustments, theme changes, repositioning of elements. Each iteration takes minutes, compared to several hours in a classic workflow, freeing the product team from the initial wireframing stages. This is detailed in our high-fidelity wireframing guide.

Use Case: Product Scoping Phase

A logistics company wanted to redesign its delivery tracking dashboard. They described the context, the user (operations manager), the objective (real-time status visualization), and constraints (display on a large screen).

In under ten minutes, Stitch generated a mid-fidelity prototype featuring a timeline, geolocation maps, and an anomaly reporting tab. The team was able to focus on business flow organization before engaging a designer to style the interface.

This example demonstrates that Google Stitch acts as a scoping accelerator, offering a visual discussion base during internal workshops without claiming to replace detailed design work.

Use Cases and Added Value of Google Stitch

Google Stitch breaks the blank canvas syndrome by instantly generating layout ideas. It stands out as an ultra-fast prototyping tool for MVPs and workshops.

Getting Over the “Blank Canvas” to Start Stronger

The main value of Stitch is to immediately provide a UI structure draft. No more starting projects with an empty grid; the team saves time by working from a first visual prototype rather than composing a blank layout.

This is particularly critical during rapid scoping phases or product hypothesis testing. UX writers, product managers, and project leads can annotate and validate the screen hierarchy without waiting for lengthy designer work.

Having a mid-fidelity interface allows comparing multiple UX directions in a few clicks, enriching the discussion and facilitating decision-making in an agile environment.

Accelerating Workshops and Rapid Prototyping

In collaborative workshops, Stitch autonomously generates multiple layout proposals. Business teams can test usage scenarios in real time and guide component choices before a dedicated designer is on board.

A financial services company integrated Stitch into an internal design sprint. Within two hours, three dashboard variations were produced, compared, and validated. This process cut the initial iteration time by 50%.

This example shows that, in a workshop context, Google Stitch enables rapid UX exploration, aligns stakeholders, and prepares a precise visual brief for the designer.

Idea Generation and Rapid Validation

When a product manager wants to explore various navigation or data presentation approaches, Stitch delivers several proposals in a single session. The team can annotate each version and identify the most relevant patterns.

This iterative mode greatly accelerates product exploration phases. Hypotheses are visually validated before allocating a more substantial design budget.

In summary, Google Stitch creates value from the project’s first half-day by quickly aligning UX, product, and management around a concrete visual prototype.


Limitations and Pitfalls to Avoid with Google Stitch

Google Stitch doesn’t replace the depth of a human designer for differentiation and coherence. Exported code and Figma files often require significant cleanup.

Risk of Generic Design and Lack of Differentiation

Interfaces produced by Stitch rely on recurring, standardized patterns. The aesthetic can feel generic, lacking strong visual identity, which limits the final application’s differentiating impact.

For a product requiring a strong brand imprint or in-depth design thinking, Stitch’s proposals remain basic. They don’t account for the subtleties of your unique visual universe.

Exclusive reliance on Stitch can result in a flat, uniform look across different projects within the same organization, whereas a human designer brings coherence and uniqueness.

Inconsistent and Poor-Quality Figma Outputs

Figma exports generated by Stitch abound with layers, poorly named groups, and duplicated components. The file structure quickly becomes hard to maintain and evolve.

In many cases, the team spends more time reorganizing or deleting unnecessary layers than building the final design. The hidden cost of prep work can negate the initial time savings.

This issue intensifies when multiple screens are generated without shared tokens or variables. Interaction patterns can diverge from one page to another, requiring manual harmonization.

Deployment to Production: Often Unusable Code

Stitch offers HTML/Tailwind exports, but the code isn’t optimized for production contexts. Classes are redundant, tags sometimes lack accessibility, and maintenance becomes costly.

Development teams often have to refactor it entirely before any deployment, which undermines the benefit of AI prototyping.

For complex business applications, directly integrating generated code is illusory. The tool remains a mockup generator, not a front-end framework ready for production.

Integrating Google Stitch into a Professional UX/UI Workflow

Pairing Google Stitch with a professional approach maximizes its potential without sacrificing final quality. It should sit at the process’s start, before the designer’s involvement and design system structuring.

Recommended Workflow: From Ideation to Design System

Step 1: Use Stitch for rapid layout exploration and to validate information hierarchy during co-creation workshops. Structured prompts define screen scope.

Step 2: Engage a designer to consolidate proposals, create detailed components, and style them according to brand guidelines. This phase ensures coherence and differentiation.

Step 3: Integrate everything into a centralized design system in Figma or another collaborative tool, with tokens, variables, and usage documentation. Developers can then extract clean code.

Tips for Writing Effective Prompts

Precision is the sine qua non for an exploitable output. Follow this framework: product context, target user, screen objective, screen type, hierarchy, components, visual direction, constraints, and expected result.

Each section should be written in a clear sentence. Constraints (responsive, accessibility, performance) must be specified. Visual directions (flat design, minimalist, neumorphic) guide the AI toward a coherent style.

A well-structured prompt avoids generic proposals and gets you closer to a truly exploitable prototype for the project’s next stages.

Collaboration between Stitch and Human Designers

Human designers step in after the initial AI iterations to overhaul the structure and style the components. They must review hierarchy, typography, color palette, and interaction fluidity.

This dual approach combines AI’s speed with human craftsmanship’s finesse. Designers save time on low-value tasks and focus on the overall experience.

The designer’s role elevates from prototype creator to experience architect, ensuring coherence, accessibility, and UX performance in the final product.

Training and Upskilling Teams

To fully leverage Stitch, product owners, UX writers, and project managers must understand prompt logic and AI iteration best practices. Internal training facilitates adoption.

Workflow tools should be rethought to include Stitch in sprints: prompt templates, feedback documentation, Figma export and asset management.

This skill development turns Stitch into a true innovation accelerator within autonomous teams aligned on product and technical priorities.

Maximize Your UI Creativity with AI Without Sacrificing Quality

Google Stitch is a fantastic springboard for scoping and prototyping phases, generating structured mid-fidelity mockups in minutes. It frees teams from the “blank canvas,” enables rapid iteration, and lets you test multiple UX hypotheses before investing in an extensive design budget.

Its limitations lie in the generic quality of proposals, sometimes chaotic Figma exports, and the inability to use code directly in production. The key is to treat it as an upstream accelerator, integrated into a structured workflow combining designers and developers.

Our Edana experts support you in defining a tailor-made process aligned with your business context and training your teams to intelligently leverage Google Stitch. Gain the optimal balance between execution speed and UX/UI excellence while preserving cohesion and scalability in your solution.




Figma vs Sketch: Cloud Collaborative Tool or Native Mac Performance for UI Design?


Author No. 15 – David

In a landscape where product teams strive to combine speed, consistency, and agility, selecting a UI design tool is no longer just a matter of comparing features.

It shapes how your designers, developers, and management collaborate on a daily basis. Between Figma’s cloud-native approach and Sketch’s native Mac performance, organizational, governance, and efficiency challenges come into play. This article explores these two philosophies—individual studio versus connected workshop—and offers insights to help you determine which aligns best with your product culture, IT infrastructure, and strategic objectives.

Native Mac Tool vs. Cloud-Native: Technical Foundations

Where the application resides dictates its strengths and limitations. Choosing a desktop or cloud solution impacts the performance, security, and governance of your digital ecosystem.

Installation, Performance, and Control

Sketch installs locally on macOS, fully leveraging the machine’s hardware resources. Each project benefits from smooth execution, even with large or complex files, thanks to the native optimization of its vector format.

In offline mode, designers retain full control over their files and data confidentiality. This approach reduces external dependencies but requires rigorous backup and version management.

This model is particularly well-suited to organizations that prioritize full ownership of their infrastructure and already maintain a homogeneous Mac environment.

Storage, Backup, and Version Management

Sketch offers local storage by default, supplemented by an optional cloud service. Versions are managed manually or via a source control system, providing precise traceability but requiring documented processes.
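
As an aside, teams that version binary Sketch files through Git commonly pair it with Git LFS so large design files don't bloat the repository. Running `git lfs track "*.sketch"` writes a rule like this to `.gitattributes`:

```
*.sketch filter=lfs diff=lfs merge=lfs -text
```

Git then stores pointers in history and keeps the heavy binaries in LFS storage, which keeps clones fast while preserving the precise traceability mentioned above.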

By contrast, Figma records each change in real time on its servers. Versions stack automatically and can be accessed at any time through a detailed history.

This continuous recording reduces the risk of data loss or version conflicts but relies on trust in the provider’s cloud infrastructure.

Case Study: Swiss Industrial SME

A mid-sized industrial SME specializing in mechanical equipment chose Sketch for its in-house design workshops. Its teams appreciate the offline stability during prototyping sessions on-site, often without reliable connectivity. They develop highly detailed mockups before manually versioning them in an internal Git repository. This approach proved that disciplined, even fully sequential, version management can maintain optimal rendering quality while meeting the sector's security requirements.

Real-Time Collaboration vs. Sequential Workflow: Impacts on Agility

Instant collaboration changes the game for distributed teams. A sequential workflow, however, can offer a more linear and less verbose path.

Real-Time Collaboration with Figma

Figma operates like a shared document: multiple designers—and even developers—can co-edit simultaneously. Visible cursors, built-in comments, and instantaneous updates streamline interactions.

Sharing is as simple as sending a URL—no export or import needed. Developers access CSS specifications and assets directly, eliminating back-and-forth emails and file attachments.

This approach significantly accelerates time-to-market, especially when teams are geographically dispersed or operate in a remote-first environment.

Sequential Workflow with Sketch

In Sketch, each designer works locally on a version. Files are then shared via Sketch Cloud or an internal sharing tool. Feedback often comes through asynchronous comments or review meetings.

This process provides a sense of individual control and linearity, with less “collaborative noise” for some profiles. Each designer focuses on their scope before finalizing a version for the developers.

For smaller teams or projects less prone to frequent changes, this model can be more suitable and less distracting.

Real-World Example: Swiss Public Agency

A cantonal department responsible for an internal management platform adopted Sketch for its centralized IT team. Designers deliver approved screens in meetings, and development teams integrate these mockups sequentially. The simplicity of this workflow reduced meetings and file duplication conflicts while ensuring strict traceability in line with regulatory requirements.


Performance, Ecosystem, and Extensions: Between Maturity and Modernity

Longevity in an ecosystem doesn’t equate to obsolescence. The agility of an integrated platform can offer gains in speed and flexibility.

Sketch Plugins and Ecosystem

Since 2010, Sketch has seen the emergence of a multitude of third-party plugins covering needs such as design systems, export, accessibility, and advanced prototyping. Their robustness and stability are proven across many organizations.

Established workflows rely on these extensions to automate recurring tasks or integrate Sketch with project management and versioning tools.

However, plugin installation and updates remain manual, sometimes requiring IT administrator approval for security reasons.

Figma’s Integrated Ecosystem

Figma provides a plugin marketplace accessible directly in the interface, with one-click installation and automatic updates. Web services (content management, accessibility tools, shared libraries) connect natively.

The Figma Community hosts templates, UI kits, and resources shared by thousands of creators. This modern ecosystem promotes rapid innovation and the adoption of new practices.

For organizations seeking fluidity and interoperability, this built-in distribution model – comparable to CI/CD integration for design assets – reduces time-to-value and lessens the IT burden related to updates.

Real-World Example: Swiss Fintech Startup

A young fintech startup moved to Figma from day one to align its design, product, and development teams. Advanced prototyping and accessibility plugins were integrated effortlessly, and shared libraries instantly standardized UI components. This rapid deployment demonstrated the value of a cloud-native ecosystem for a high-growth structure, fostering continuous iteration and feedback integration.

Choosing Based on Your Context and Strategic Perspective

The best tool is the one that fits your organization and product culture. The general trend is moving toward cloud-first, but every context remains unique.

Technical and Operational Criteria

If your fleet is exclusively Mac and you work primarily offline, Sketch delivers unparalleled performance with large files. It minimizes latency and maximizes stability for intensive graphic workshops.

Conversely, for automated versioning hygiene, cross-OS accessibility, and continuous design system deployment, Figma stands out with its cloud infrastructure and built-in version history.

License costs, IT access management, and your cloud security strategy or local security approach are all factors to weigh in your decision.

Organizational Alignment and Product Culture

Remote-first or distributed teams favor immediate co-creation; Figma facilitates this mode of work. More hierarchical structures, with separate design, QA, and development roles, may prefer the sequential cycle of Sketch.

More broadly, an agile culture—favoring rapid iteration, continuous feedback, and cross-functional sharing—naturally aligns with a cloud-native tool. Conversely, a structured, planned workflow may find the stability it needs in Sketch.

Trends and Outlook for 2026

The market is converging toward unified platforms that combine design, prototyping, and development in a collaborative environment. Expectations center on tighter integration with CI/CD toolchains and product management systems.

Vendors are also betting on AI to accelerate component creation and variant generation, turning it into a competitive lever. Choosing a cloud solution lets you benefit from these advances quickly, without local updates, and AI-as-a-Service offerings spare teams the complexity of in-house development.

However, desktop preference will persist in certain regulated or sensitive sectors, where data sovereignty and native performance remain priorities.

Collaborative Workshop or Personal Studio: Choose Your Product Philosophy

Figma and Sketch illustrate two complementary visions of UI design. Sketch offers a robust, controlled, and high-performance environment for sequential workflows on Mac. Figma, on the other hand, provides a shared, extensible, and accessible space for agile and distributed teams. The choice should be made according to your IT architecture, organization, and the maturity of your collaborative processes.

Regardless of your choice, aligning the tool with your product governance objectives, time-to-market challenges, and company culture is crucial. Our experts are ready to help you analyze your workflows, technical constraints, and growth strategies to select the most suitable and high-performing solution for your teams.




What to Make of Apple’s Liquid Glass? UI Revolution… or Underestimated Product Misstep?


Author No. 15 – David

Since the announcement of Liquid Glass, Apple’s new interface layer, conversations have naturally focused on its beauty and aesthetics. Yet behind these transparency and blur effects lies a critical question for every IT team: is this a genuine ergonomic improvement or a hidden regression masked by a spectacular finish? Liquid Glass embodies Apple’s ambition to evolve the mobile UI toward greater depth and contextual dynamism.

But what are the real impacts on readability, accessibility, and UX consistency? This article unpacks the concrete challenges for organizations and suggests ways to leverage this trend without compromising the user experience.

Ambitions of Liquid Glass

Liquid Glass is not merely an aesthetic facelift. It represents Apple’s desire to set a new post-iOS 7 standard.

This interface aims to move beyond flat design by reintroducing depth, micro-effects, and contextual dynamism.

Origin and Goals of the Project

According to Apple, Liquid Glass marks a major step toward a “living” interface, where every transition becomes a perceivable micro-event. The idea is to break away from a flat surface to offer a sense of relief and texture, making interaction more intuitive and engaging.

To achieve this, Apple has developed a system of semi-transparent layers combining dynamic blur and subtle animations. These elements adapt in real time to content and user gestures.

In practice, this approach seeks to rival “neumorphism” effects while maintaining the rigor of Apple’s design guidelines. The goal is clear: to provide a premium, differentiating positioning for both third-party apps and the native system.

Visual and Experiential Promises

Aesthetically, Liquid Glass captivates with its fluidity and organic rendering. Each panel appears to float above the content, creating a sense of depth absent from ultra-flat interfaces.

Beyond the “wow” effect, Apple touts enhanced comprehension: interactive zones stand out more clearly thanks to blurred edges and dynamic shadows designed to guide the eye naturally.

The contextual dimension comes into play when transparency automatically adjusts contrast and saturation based on the wallpaper and ambient light. In theory, UX is enriched by a more immersive experience.

Example of a Swiss Pilot Project

A Swiss SME in the medical sector integrated Liquid Glass into its internal appointment-booking app. The team aimed to modernize the UI and strengthen its brand image with both patients and staff.

The result—fluid and elegant—was immediately praised by users during the initial “showroom” demonstrations. This example shows that Liquid Glass can become a powerful marketing asset.

However, the project revealed the need to finely tune contrast and blur settings for different user profiles, or risk losing readability in key sections (scheduling, critical notifications).

Accessibility and Readability

Transparency and blur can compromise contrast stability. Text readability becomes highly dependent on background content.

This variability risks non-compliance with WCAG 2.1 recommendations and carries legal and business consequences.

Unstable Contrast and Background Dependency

When text floats over semi-transparent areas, its contrast relies entirely on the underlying content. A dark background enhances readability, whereas a colorful or bright image can render text virtually illegible.

Multiple real-world tests showed contrast ratios falling well below the 4.5:1 minimum that WCAG 2.1 requires for normal-size text (level AA). Alerts and action buttons can then go unnoticed.
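That 4.5:1 figure is computable: WCAG 2.1 defines a relative-luminance formula and derives the contrast ratio from it, so translucent panels can be audited programmatically. A minimal TypeScript sketch of the standard formulas (the sample colors are illustrative):

```typescript
// WCAG 2.1 relative luminance of an sRGB color (channels 0-255).
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors, per WCAG 2.1 (ranges from 1:1 to 21:1).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// White text on black passes AA; white on a light gray does not.
console.log(contrastRatio([255, 255, 255], [0, 0, 0]));       // 21
console.log(contrastRatio([255, 255, 255], [200, 200, 200])); // well below 4.5
```

Running such a check against representative wallpapers is one concrete way to quantify the "unstable contrast" problem before shipping.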

Compliance with Accessibility Standards

Failing to meet WCAG criteria exposes organizations to legal and reputational risks. European legislation such as the European Accessibility Act now imposes strict requirements on digital interfaces.

To be accessible, a UI must guarantee a minimum contrast between text and background. Liquid Glass, with its adaptive transparency, violates this rule unless supported by reliable fallback mechanisms.

Businesses must plan workarounds: options to disable effects, high-contrast themes, or dynamic adaptation driven by developers.

Illustration from a Training Institute

An online registration portal for a training institute adopted Liquid Glass. Initial visual feedback was positive, notably from the marketing team.

However, accessibility testing by the quality team revealed several form-entry workflows unusable for low-vision participants. Form instructions vanished against certain custom wallpapers.

This case demonstrates that a visually attractive interface can quickly become an operational obstacle if readability is not prioritized from the outset.

{CTA_BANNER_BLOG_POST}

Variability of the Experience

Liquid Glass’s experience varies with wallpaper, lighting, and device generation. Design becomes less predictable.

This variability increases the risk of inconsistent UX and a perception of an “unfinished” product.

Influence of Wallpaper and Ambient Light

Liquid Glass rendering adjusts automatically to the chosen background image. A texture-heavy or brightly colored wallpaper can turn a menu into an unreadable blur.

Likewise, in low-light environments, transparency can appear too pronounced, creating a halo effect that hinders visual focus.

These external conditions are beyond product teams’ control. It becomes impossible to guarantee a uniform experience across all user profiles.

Variability by Device Generation

Apple’s latest screens deliver HDR rendering and higher brightness, showcasing blur and dynamic effects perfectly. On older models, the same code produces a dull, grainy result.

This fragmentation leads to unequal experiences: some users enjoy a “premium” interface, while others see a half-baked tool.

For organizations, this demands extensive testing across multiple device generations and more frequent updates and maintenance of graphical settings.

Example from a Retail Chain

A retail chain deployed Liquid Glass in its internal stock-management app. Newer tablets displayed a crisp, fluid visual, while older models exhibited artifacts and poorly defined blur.

This led to dual support cycles: one team optimizing for iPad Pro models and another manually adjusting configurations on first-generation iPads.

The incident highlights the importance of assessing hardware impact before rolling out a highly dynamic design across an entire fleet.

Business Risks and Best Practices

Liquid Glass can be a product differentiator if mastered properly. Otherwise, it becomes a source of complexity and UX degradation.

Organizations should avoid blindly copying this trend without adapting their design and testing processes.

Legal, Reputational, and Accessibility Stakes

Overlooking accessibility constraints invites legal action and costly remedial work. WCAG compliance is no longer optional—it’s a regulatory imperative in Switzerland and Europe.

Excluding user segments (e.g., low-vision users, seniors) damages brand image and limits the reach of both enterprise and consumer applications.

Executive leadership must treat accessibility as a strategic component, not a decorative afterthought.

Increased Product Cycle Complexity

Designers have to produce multiple mockups for each background state and user profile. Developers implement conditional logic to adjust blur, contrast, and colors.

This technical overhead lengthens development time and drives up maintenance costs. Every iOS or app update may require a complete review of Liquid Glass settings.

Total cost of ownership skyrockets, diverting resources from higher-value business features.

Common Pitfalls and Good Practices

The first mistake is prioritizing design over readability. Any effect that hinders comprehension should be disabled or confined to non-critical areas.

Many teams blindly replicate Apple’s approach without contextualizing their business needs. The result often works in demos but collapses in real-world use.

High-quality products leverage Liquid Glass selectively: micro-effects on key buttons, clearly delineated content zones, automatic opaque fallback when contrast drops.
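The "automatic opaque fallback when contrast drops" mentioned above reduces to a simple guard: keep the translucent treatment only while the measured contrast stays at or above the AA threshold. A hypothetical sketch (the function and style names are ours, not an Apple API):

```typescript
type PanelStyle = "translucent" | "opaque";

// Pick a panel treatment from a measured text/background contrast ratio.
// Falls back to an opaque background whenever AA (4.5:1) would be violated.
function panelStyleFor(measuredContrast: number, aaThreshold = 4.5): PanelStyle {
  return measuredContrast >= aaThreshold ? "translucent" : "opaque";
}

console.log(panelStyleFor(7.2)); // "translucent": effect kept
console.log(panelStyleFor(2.8)); // "opaque": readability wins
```

The same guard can be re-evaluated whenever the wallpaper or ambient conditions change, so the effect degrades gracefully instead of failing silently.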

Crucially, test on varied backgrounds, across device generations, and under real-world conditions before full-scale deployment.

Maintaining Visual Innovation without Sacrificing Experience

Liquid Glass is a true visual advance, but its implementation requires discipline and adaptability. Dynamic effects must remain enhancements, not obstacles to comprehension.

Contextual and dynamic design is a powerful trend, yet each variation introduces risk. It’s wiser to precisely control application areas and plan robust fallbacks to manage complexity.

IT directors, digital transformation leads, and project managers: our experts are ready to guide you through a thoughtful integration of Liquid Glass or any other design standard. Together, we’ll ensure a premium, compliant experience that balances innovation, accessibility, and performance.

Discuss your challenges with an Edana expert

PUBLISHED BY

David Mendes


David is a Senior UX/UI Designer. He crafts user-centered journeys and interfaces for your business software, SaaS products, mobile applications, websites, and digital ecosystems. Leveraging user research and rapid prototyping expertise, he ensures a cohesive, engaging experience across every touchpoint.


Efficient Prototyping in Figma: 6 Principles for Building Complex Flows Without Wasting Hundreds of Hours


Author No. 15 – David

In a modern product environment, high-fidelity prototyping ceases to be a mere visual aid and becomes a true strategic lever. Too often seen as lengthy, repetitive, and fragile, it can actually accelerate UX validations, align product and engineering teams, and reveal user dead ends. The key lies not in the tool, but in the method.

Core Principles of Effective Prototyping

Adopt clear navigation and modular interactive components to reduce duplication. Organize your interactions around variables to manage state changes without multiplying frames.

Navigation

Simple navigation relies on the “one click → page change” principle. Each interaction triggers a move to another frame, ensuring a purposeful and understandable flow.

In Figma, set your triggers to “On click” with the “Navigate to” action to move forward, and use “Back” for backward navigation. This approach automatically reverses the animation and minimizes the risk of misconfiguration.

By rigorously distinguishing Pages from Frames, you maintain an organized workspace and limit the number of elements to handle during updates. A clear structure eases iterations and collaboration between designers and developers.

Interactive Component

An interactive component changes state with each interaction, without relying on new frames. Ideal for multi-state buttons, toggles, checkboxes, or radio buttons.

Create a component with multiple variants for each visual state, then link them using “Change to”. To grasp the importance of prototyping, see our guide on prototyping. A single component can be reused infinitely, significantly reducing duplication.

Maintenance becomes more efficient: modifying the parent component propagates changes everywhere. Your teams gain agility, and design or content updates are applied in a few clicks.

Event (via variables)

Variables allow changes in other elements following an interaction. For example, altering text, displaying or hiding a pop-in, or adjusting a price dynamically.

In Figma, create variables tied to layer properties, then trigger them with “Set Variable”. All elements linked to that variable update instantly.

This approach eliminates the need for hundreds of frames, simplifies managing complex scenarios, and makes your prototypes more robust against evolving product requirements.
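Conceptually, a Figma variable behaves like a small shared state store: one "Set Variable" action mutates the value, and every bound layer re-renders from it. A hypothetical TypeScript model of that logic (an illustration of the pattern, not the Figma plugin API):

```typescript
// Minimal model of a prototype variable: shared state plus bound "layers".
class PrototypeVariable<T> {
  private listeners: Array<(v: T) => void> = [];
  constructor(private value: T) {}

  // "Set Variable": update the state and refresh every bound element.
  set(next: T): void {
    this.value = next;
    this.listeners.forEach((fn) => fn(next));
  }

  get(): T {
    return this.value;
  }

  // Bind a layer: render once with the current value, then on every change.
  bind(render: (v: T) => void): void {
    render(this.value);
    this.listeners.push(render);
  }
}

// A price label bound to the selected plan; no extra frames needed.
const plan = new PrototypeVariable<"basic" | "pro">("basic");
let priceLabel = "";
plan.bind((p) => { priceLabel = p === "pro" ? "CHF 49/mo" : "CHF 19/mo"; });

plan.set("pro");
console.log(priceLabel); // "CHF 49/mo"
```

This is exactly why variables scale better than frames: one state change fans out to every bound element, instead of requiring a duplicated screen per combination.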

Example: A financial institution restructured its prototype around interactive components and variables for a subscription flow. The result: screen duplication was cut by two-thirds, and each iteration took under 30 minutes instead of several hours.

Advanced Combinations for Complex Flows

Combine navigation, components, and variables to orchestrate rich user scenarios. Chain actions in the proper order to maintain fluidity and visual coherence.

Navigation + Event

Combining “Navigate to” and “Set Variable” lets you display a new screen while initializing necessary states. Order matters: trigger navigation before setting the variable to avoid abrupt visible changes.

This combination is ideal for simulating conditional loadings or transitions that depend on a business state (for example, choosing a subscription plan that determines which offers to display).

This avoids creating extra frames for every possible state while keeping the prototype faithful to the real application’s interactions.

Interactive Component + Event

An interactive component changing state can trigger a global event. For instance, selecting a pricing plan changes the button’s appearance and updates the price shown elsewhere.

Structure your component with internal variants for appearance, then add a “Set Variable” to trigger the global update. You minimize duplication and centralize business logic.

This accelerates UX iterations: a style or content adjustment is made at a single point, without starting from scratch for each variation.

Interactive Component + Navigation

For a “press” effect followed by a redirect, combine “While pressing” for animation and “On click” for navigation. The interaction feels more realistic, with immediate visual feedback.

Set the “Pressed” variant to the “While pressing” action, then use “On click” to navigate to the next frame. This best practice ensures an interactivity perception close to the final product.

You avoid rigid prototypes where navigation triggers on click without a press effect, improving flow comprehension during validation sessions.

Example: A Software-as-a-Service vendor combined components and variables for a non-linear purchase funnel. The prototype managed four business paths with fewer than ten frames, versus over sixty previously, and user tests showed a better understanding of complex scenarios.

{CTA_BANNER_BLOG_POST}

Common Mistakes and Modular Structure

Avoid managing everything in a single component and multiplying “On click” triggers. Favor a modular architecture to ensure scalability.

Mistake 1: Handling Everything in the Component

Centralizing all interactions and states in a single component makes its set of variants balloon. The component becomes hard to maintain and loses readability.

Each added state or interaction exponentially increases variant complexity. Teams lose agility and risk introducing bugs during updates.

It is recommended to split logic into several atomic components, each responsible for one specific interaction or state to limit change surfaces.

An atomic design approach promotes reuse and ensures centralized updates while avoiding unexpected side effects.

Mistake 2: Double “On click” Trigger

Placing an “On click” on both the parent component and the instance leads to conflicts: only the higher-level trigger executes.

Interactions then fall through the cracks, making the prototype incoherent and frustrating for testers. Some actions won’t appear, or navigation won’t trigger as expected.

To prevent this, keep triggers at the most relevant level: within the component for internal state changes, at the instance level for global navigation or cross-event triggers.

This discipline ensures clear traceability of interactions and easier maintenance.

Structured Solution

Adopt a modular architecture inspired by atomic design: atoms for simple visual elements, molecules for stateful components, organisms for complex assemblies.

Variables manage global interactions, while components handle their local rendering. Each level has a clear responsibility, reducing cross-dependencies.

This modular structure doesn’t hinder creativity: it allows rapid prototyping of complex flows, sharing patterns, and ensuring coherence at scale.

Collaboration among designers, product owners, and developers becomes smoother because everyone knows where to adjust behavior or visuals.

Non-linear Prototype as a Strategic Lever

A non-linear flow reveals user errors, allows backward steps, and simulates real-world dead ends. You gain deeper insights during usability testing.

Limits of a Linear Flow

A linear prototype presents a perfect journey without options to go back or explore alternative scenarios. Testers rarely encounter error cases.

This overly controlled simulation doesn’t reflect reality: users don’t always act as expected and often face dead ends that remain invisible in a rigid prototype.

The result is biased validation, which smoothes over friction points and leaves your team unprepared for real production interactions.

Benefits of Realistic Testing

A non-linear prototype integrates backward navigation, input errors, and alternate paths. You observe how a user reacts to a crash, missing data, or an unintended choice.

Such usability tests reveal unexpected friction points and guide product priorities. You anticipate deviant behaviors and plan workarounds before launch.

The prototype becomes a true training ground where each business logic is tested against real usage diversity.

In-depth Insights in Usability Testing

Non-linear scenarios yield richer qualitative data: where does the user get stuck, which errors do they correct, and how do they navigate after an impasse?

These insights guide both the product roadmap and technical priorities. You reduce post-launch surprises and improve user satisfaction.

The prototype thus becomes a more complete validation tool than a simple mock-up, aligning product, design, and development teams more effectively.

Example: A logistics operator implemented a non-linear prototype for its shipment tracking interface. Tests uncovered dead ends during anomaly reporting, enabling workflow revisions that reduced manual escalations by 40% in production.

Turn Your Prototypes into a Competitive Advantage

By structuring your prototypes around navigation, interactive components, variables, and their combinations, you move from a repetitive, fragile prototype to a modular, scalable architecture. Common mistakes are avoided through an atomic design method, and non-linear flows provide crucial insights to anticipate real-world usage.

Whether building a new product or optimizing an existing flow, our experts are ready to define the most suitable prototyping structure for your context. They will help you save time, reduce duplication, and speed up validation before development.

Discuss your challenges with an Edana expert



Dark Mode: How Dark Mode Enhances User Experience and Perceived Quality


Author No. 15 – David

Dark mode has become an essential choice for modern interfaces, offering an aesthetic alternative to traditional light backgrounds. It’s no longer just a trend but a true lever for improving visual comfort and the perceived quality of a digital product.

Properly designed, it reduces eye strain in low-light environments, strengthens brand image, and can even have a positive impact on device battery life depending on display technologies. In a context where every UX detail counts, understanding and mastering dark mode becomes strategic for any organization aiming to deliver a no-compromise user experience.

Understanding Dark Mode

Dark mode reverses the usual light background and dark text scheme to offer a dark background with light text. This configuration spread rapidly with the advent of smartphones and is now standard across operating systems, browsers, and applications.

Origin and Adoption

The concept of dark mode emerged to meet the demand for more comfortable screen use in low-light environments. The earliest implementations appeared on low-power devices, notably e-ink readers.

With the arrival of OLED and AMOLED displays, the idea of reducing overall brightness became more relevant. Dark pixels consume less power while still providing high contrast for light text.

Gradually, operating system vendors and app developers built in native light/dark toggles, encouraging users to favor this mode in their daily routines.

Visual Mechanism

In a traditional scheme, dark text on a light background offers good readability in bright conditions. In dark mode, it’s the opposite: the interface relies on a dark background, and light elements draw the user’s attention.

This inversion leverages contrast management to ensure text remains optimally legible. Designers often use deep grays rather than pure black to soften the overall effect.

The result is a cleaner-looking interface where design elements and main content stand out while easing eye strain in dimly lit spaces.

Evolution in Operating Systems and Apps

Major operating systems now include a system-wide toggle between light and dark, ensuring a consistent experience regardless of the usage environment. This uniformity simplifies life for both users and developers.

For example, a Swiss digital banking services provider deployed a dark theme in its mobile app. This change led to increased nighttime engagement, demonstrating the practical benefits of dark mode for after-hours use.

In large enterprise software suites, the theme switches automatically based on local time or ambient light, offering consistent comfort tailored to context.

Visual Comfort and Perceived Quality

Dark mode can significantly reduce eye strain in low-light conditions. It also enhances the perception of modernity and quality in a digital product.

Reducing Eye Strain

In dimly lit environments, a dark background narrows the luminance gap between the screen and its surroundings, preventing the eyes from overexerting to focus. The result is a more pleasant reading experience and reduced ocular tension.

However, this benefit isn’t universal: in bright daylight, a dark interface may suffer from glare or light scattering due to backlighting, making reading harder and increasing visual effort.

A hybrid approach—combining automatic ambient light detection with system preference tracking—optimizes the experience by adapting the theme to the context.

Impact on Perceived Modernity

Adopting dark interfaces is often associated with innovation and refined design. Dark interfaces convey a sense of sophistication, highly valued in tech and creative industries.

For business systems or data-focused applications, the high contrast highlights dashboards and visualizations, enhancing the readability of key information.

Seamless light/dark coexistence can thus become a differentiator, contributing to the premium positioning of a digital product or enterprise platform.

Inclusivity and Accessibility

A well-designed dark theme accounts for accessibility criteria, ensuring sufficient contrast for users with visual impairments such as light sensitivity or presbyopia.

A Swiss medical technology firm implemented dark mode on its patient monitoring platform. Teams noted fewer negative feedback incidents related to reading medical information on screen, especially among older users.

Thus, an inclusive dark mode must adhere to WCAG contrast standards and provide adjustable font size or weight for each component.

{CTA_BANNER_BLOG_POST}

Dark Mode and Energy Efficiency

Dark mode can help extend device battery life depending on the display type. This energy optimization is most notable with OLED technologies.

Power Consumption on OLED Displays

On an OLED screen, each pixel emits its own light. A black or very dark pixel is effectively turned off, reducing the display’s overall power draw.

Internal studies show that dark-dominated apps can save up to 20% of battery on certain smartphones. This effect is particularly pronounced during heavy, prolonged usage.

However, the impact varies with the brightness level and the proportion of dark elements shown on screen.

Backlighting on LCD Screens

On an LCD, the backlight is always on regardless of color. Energy savings from dark mode are therefore marginal.

In practice, the energy gain on an LCD-based laptop is often under 5%. This figure should be weighed against visual comfort benefits when evaluating the theme’s relevance.

In some cases, a very dark background may even incur a slight power penalty if the backlight increases to compensate for perceived dimness.

Implications for Mobile Apps

Mobile applications can leverage dark mode to bolster their autonomy argument, especially in industrial or field contexts where recharging isn’t always immediate.

In logistics, for instance, a mobile ERP set to a dark theme allows warehouse operators to extend their work cycles without fearing a dead battery at day’s end.

It remains essential to sync the theme with system settings automatically and offer a manual toggle to prevent discomfort.

Design Best Practices for Successful Dark Mode

A successful dark mode goes beyond simply inverting colors: it requires precise palette calibration, state management, and contextual toggling. This approach ensures optimal readability and a coherent experience.

Choosing Colors and Appropriate Contrast

Avoid pure black (#000000) for backgrounds to reduce visual tension and soften the interface. Very dark grays provide an elegant, more comfortable alternative.

Main text should offer high contrast, while secondary or helper text can use slightly less bright grays to establish information hierarchy.

Using an off-white for light elements prevents the glare of pure white against a dark background.

Managing States and Components

Each component must have variants for hover, active, disabled, or error states. Focus indicators, for example, should maintain the same contrast level as primary text.

Feedback colors (success, error, warning) require specific tuning—vivid hues on a dark background can appear oversaturated and fatiguing if not properly calibrated.

A component library dedicated to dark mode within your design system ensures consistency and speeds up development.

Clear, Contextual Toggling

Providing a visible manual toggle in the application or platform settings gives users control over dark mode regardless of their environment.

A Swiss online training provider implemented a contextual setting: the interface switches to dark mode automatically at sunset while still allowing user override. This setup led to higher evening module completion rates.

For an even more seamless experience, ambient light detection can adapt the theme in real time, ensuring constant comfort without extra user action.
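The toggling rules just described (manual override first, then ambient light, then the system setting) boil down to a small precedence function. A sketch with hypothetical names; the 50-lux cutoff is an illustrative assumption, not a standard value:

```typescript
type Theme = "light" | "dark";

interface ThemeContext {
  userOverride?: Theme;       // explicit manual choice, if any
  ambientLux?: number;        // ambient light sensor reading, if available
  systemPrefersDark: boolean; // OS-level preference
}

// Precedence: manual override > ambient light > system setting.
function resolveTheme(ctx: ThemeContext): Theme {
  if (ctx.userOverride) return ctx.userOverride;
  if (ctx.ambientLux !== undefined) return ctx.ambientLux < 50 ? "dark" : "light";
  return ctx.systemPrefersDark ? "dark" : "light";
}

console.log(resolveTheme({ systemPrefersDark: true }));                  // "dark"
console.log(resolveTheme({ ambientLux: 300, systemPrefersDark: true })); // "light"
console.log(resolveTheme({ userOverride: "light", ambientLux: 10, systemPrefersDark: true })); // "light"
```

Putting the manual override at the top of the precedence chain is what keeps the automation from ever feeling coercive.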

Optimize Your Interface with a Mastered Dark Mode

Dark mode is a strategic asset for modernizing an interface, reducing visual fatigue, and—depending on the screen—improving device autonomy. It should be designed as a full-fledged theme, with adjusted palettes, dedicated states, and a clear toggle ideally aligned with system settings.

Our experts are ready to help you implement a tailor-made dark theme that addresses your business challenges and the specifics of your digital ecosystem.

Discuss your challenges with an Edana expert



Figma & Developer Handoff: A Comprehensive Guide to Successful Design–Development Collaboration


Author No. 15 – David

In a landscape where a digital product’s success relies as much on team alignment as on code quality or design daring, the handoff between designers and developers is a critical juncture. A poorly executed handoff introduces unnecessary friction, information loss, and costly iterations, undermining time-to-market and end-user satisfaction.

Once limited to sending static mockups, this process is now reinvented by Figma, which has become a true product production hub. Beyond simply handing off pixels, teams now establish a structured, traceable, and continuous collaboration that ensures UX/UI specifications are implemented faithfully and swiftly.

Fragmentation of the Handoff Before the Figma Era

The traditional handoff relied on scattered exchanges, causing confusion and inconsistencies. Static documents and multiple emails fractured the product vision and delayed releases.

Sending Static Files

In the classic model, designers would send Sketch, PSD, or PDF files, often as bulky email attachments. Each new version required a fresh send, increasing the risk of working on an outdated iteration. Developers could easily implement obsolete elements without a clear reference to the approved version.

Informal chats or meetings would sometimes patch the gaps, but without centralization. The result: partially or incorrectly implemented features that were only corrected afterward. This fragmentation created a rework cycle that could significantly delay the overall delivery.

The time spent clarifying which version was current ate into schedules. Teams often found themselves stuck, waiting for confirmation of the official version before proceeding, creating bottlenecks.

Isolated Documentation and Multiple Versions

To describe interactions or dynamic behaviors, designers drafted a separate specification document—often a Word or PDF file—aligned with the product workflow. This document wasn’t linked directly to the graphic files. Developers had to navigate between two disparate sources to understand the UX intent.

These documents weren’t always updated with each mockup iteration, leading to discrepancies between written specs and the actual interface. Developers entered into a question-and-answer loop, slowing development velocity.

Version tracking became a challenge for IT project managers, who had to manually verify that graphic version numbers matched development commits. This coordination overhead consumed precious resources.

Project Consequences

A mid-sized Swiss industrial company experienced this disjointed workflow firsthand: multiple mockup deliveries led to a two-month delay in deploying a business platform. Technical teams implemented outdated screens, resulting in massive front-end rework.

This case illustrates how the lack of centralization and traceability harms productivity and increases cognitive load on contributors. To learn more about delays in digital projects, see why your digital project is delayed.

Adopting a shared tool became imperative—not only to accelerate time-to-market, but also to reduce human error and strengthen consistency between design and development.

Figma as the Central Product Production Hub

Figma provides a single source of truth accessible via a link, eliminating local versions and drift. Its cloud-native, collaborative nature transforms the handoff into a continuous, transparent process.

Single Source of Truth

With Figma, a simple URL ensures all stakeholders view the latest mockups. Developers no longer need to download files or hunt for the correct version in an internal repository. This approach contrasts with the manual methods described in how to structure a high-performing software development team.

Updates are instantaneous, avoiding wait times and large file exchanges. A component or style change propagates immediately, ensuring constant synchronization.

Real-Time Collaboration

Figma’s collaborative workflow allows designers and developers to co-edit the same document, comment directly on graphic elements, and communicate in context. Misunderstandings from spec translation vanish.

Teams can discuss technical constraints during the UX design phase, adjust prototypes before formal sign-off, and avoid major downstream corrections.

This mindset fosters a culture of continuous improvement, where each iteration gains in accuracy and speed. Feedback loops occur within the tool itself, without adding friction points.

Organizational Impacts

A major Swiss financial institution adopted Figma to oversee the redesign of its multi-platform client portal. By breaking the document-mockup cycle, it reduced inter-team blockers by 30%.

This example shows how Figma acts as a living documentation center, capturing UX and technical decisions, and serving as the reference point for testing and continuous integration phases.

It demonstrates that beyond technology, Figma influences product governance, enabling fluid and agile coordination between design and development disciplines.

{CTA_BANNER_BLOG_POST}

Structuring a Continuous, Scalable Handoff

An effective handoff is part of an integrated cycle that extends far beyond the moment of delivering final mockups. It begins at product requirements definition and continues through implementation.

Engage Teams Early

To avoid technical surprises, involve developers during the UX exploration phase. They contribute insight on feasibility, performance, and potential regulatory constraints—such as when digitizing onboarding to minimize operational friction.

Early involvement helps anticipate risks, calibrate development efforts, and precisely define reusable components within the design system.

Organizing Files and Components

Each feature should have its own dedicated page in Figma, containing visual states, variants, and expected behaviors. This structure makes it easier for developers to locate and export assets.

A coherent naming system (for frames, groups, and components) mirrors the code hierarchy and reinforces alignment between design and software architecture. To dive deeper into best practices, see software architecture fundamentals.
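
As a small illustration of this mirroring (the naming scheme and target path are assumptions, not a Figma convention), a slash-separated layer name can be mapped mechanically to a component path:

```python
# Hypothetical illustration: map a Figma layer name such as
# "Button/Primary/Hover" to a code-side component path, so the
# design file hierarchy mirrors the repository structure.

def layer_to_component_path(layer_name: str) -> str:
    """Convert a slash-separated Figma layer name to a source path."""
    parts = [p.strip() for p in layer_name.split("/")]
    component, *variants = parts
    # Variants (state, size, theme) become a name suffix rather than folders.
    suffix = "".join(variants)
    return f"src/components/{component}/{component}{suffix}.tsx"

print(layer_to_component_path("Button/Primary/Hover"))
# src/components/Button/ButtonPrimaryHover.tsx
```

When the convention is this mechanical, tooling can verify it automatically, which keeps design and code hierarchies from drifting apart.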

Ongoing Development Support

The handoff doesn’t end with the mockup delivery. High-performing teams continue to comment and collaborate throughout development, refining visual and functional details in real time.

Regular reviews (for example during sprints) validate implementations directly in the browser, comparing the live output to the Figma prototype.

A healthcare organization in Switzerland structured this process for its internal application, reducing non-compliant sprint review feedback by 40%. This continuous collaboration strengthened trust between business and technical teams.

Leveraging Figma’s Advanced Features

Figma’s built-in tools speed up spec extraction, enrich documentation, and streamline integration into the development ecosystem. Mastering them is a productivity multiplier.

Integrated Comments and Documentation

Annotations on components specify technical constraints, interaction states, and expected behaviors. Teams retain a decision history in the visual context. To explore key UI components, check out our guide.

Interactive Prototypes

Instead of lengthy text descriptions, Figma prototypes simulate transitions, animations, and user flows, offering a concrete vision of the final product.

Developers can easily identify key interactions, measure desired latency, and faithfully reproduce the experience.

This approach drastically reduces ambiguity and enhances quality, especially for complex behaviors such as nested modals or loading states.

Integration with Ecosystems and Plugins

Figma connects with tools like Jira, Storybook, and Zeroheight, synchronizing documentation and tracking tasks. Assets can be automatically exported to Git repositories or testing environments.

Specialized plugins extract color codes, generate CSS/React snippets, and verify design-system consistency, ensuring continuous UI-to-code alignment.
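
What such a token-extraction step does can be sketched roughly as follows, assuming a simple nested token dictionary (the token names and values are illustrative):

```python
# Hedged sketch of design-token extraction: flatten a nested token
# dictionary into CSS custom properties, the kind of output a
# token-export plugin typically produces.

def tokens_to_css(tokens: dict, prefix: str = "-") -> list[str]:
    """Flatten nested tokens into `--path-to-token: value;` lines."""
    lines = []
    for key, value in tokens.items():
        name = f"{prefix}-{key}"
        if isinstance(value, dict):
            lines.extend(tokens_to_css(value, name))
        else:
            lines.append(f"{name}: {value};")
    return lines

tokens = {"color": {"primary": "#0055ff", "surface": "#ffffff"},
          "radius": {"sm": "4px"}}
print("\n".join(tokens_to_css(tokens)))
```

Generating these variables from a single token source is what keeps UI and code aligned release after release.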

Using these extensions enables a seamless transition to continuous integration and preserves a single product repository throughout the lifecycle.

Turning Your Design–Dev Collaboration into a Competitive Advantage

An effective handoff reduces time-to-market, limits rework, and improves product quality. By using Figma as a single source of truth and living documentation, teams establish fluid, traceable collaboration.

File structuring, early developer involvement, and interactive prototypes are best practices that drive fast, accurate UX/UI implementation. Plugins and integrations further boost operational efficiency by automating asset delivery and code guidelines.

Our experts are ready to design a tailored handoff strategy and help your teams adopt Figma as the central product production hub.

Discuss your challenges with an Edana expert

PUBLISHED BY

David Mendes

David is a Senior UX/UI Designer. He crafts user-centered journeys and interfaces for your business software, SaaS products, mobile applications, websites, and digital ecosystems. Leveraging user research and rapid prototyping expertise, he ensures a cohesive, engaging experience across every touchpoint.

AI-First Prototyping: Reduce Your Product Cycle Time by 30% with Generative AI Copilots

Author no. 15 – David

In a landscape where generative AI is struggling to achieve broad adoption in Switzerland, AI-first prototyping has emerged as a decisive lever to accelerate product validation and optimize design budgets.

By combining a large language model to define UX flows and hierarchies with a visual tool to generate mid-fidelity wireframes, you can reduce product cycle time by 30% and minimize late-stage iterations. This approach streamlines the journey from concept to actionable mockup, offering a strategic advantage to organizations eager to improve time-to-market and implement a more agile design process. It relies on open-source, modular, and secure principles—avoiding vendor lock-in and ensuring optimal business alignment.

Why Adopt AI-First Now?

AI-first prototyping can cut product cycle time by up to 30% and dramatically accelerate business validation. In Switzerland, where the industrialization of generative AI remains partial, early adopters gain a significant competitive edge.

Documented 30% Cycle-Time Savings

Numerous studies confirm that integrating AI at the prototyping stage significantly reduces the number of design iterations. Generative AI copilots automate UI drafts, freeing designers from repetitive structuring tasks.

By producing initial wireframe versions and offering layout variations, AI tools shorten the transition from ideation to an actionable mockup.

The result is faster delivery of testable prototypes, directly impacting the ability to iterate and refine the product before production.

First-Mover Opportunity in French-Speaking Switzerland

The Swiss market still shows measured adoption of generative AI in digital design processes. This intermediate maturity level presents a window of opportunity for organizations ready to invest in AI-first prototyping.

Early integrators of these technologies can offer differentiated user experiences and gain agility over competitors slower to transform.

By leveraging open-source, modular solutions, you avoid the pitfalls of vendor lock-in while rapidly upskilling internal teams.

Key Challenge: Rapid Validation and Fewer Late Iterations

Validating product hypotheses in the earliest days of a project avoids costly development adjustments. AI-first delivers an interactive prototype that lets you test concepts with end users before committing significant resources.

With near-instant wireframes and a click-dummy, feedback focuses on UX and priority features rather than aesthetic details.

Example: A mid-sized bank in French-speaking Switzerland validated a full proof of concept in 48 hours, demonstrating the decision-making speed enabled by AI-first prototyping.

Defining AI-First Prototyping

AI-first prototyping combines the power of a large language model to structure user flows and UX hierarchy with a visual engine that automatically generates wireframes. This synergy accelerates mid-fidelity mockup creation and ensures realistic content for user testing.

Logical Structuring with an LLM

Using a large language model such as ChatGPT allows you to map out user flows, identify jobs-to-be-done, and compile an exhaustive list of required screens and components.

By feeding targeted prompts, AI generates a logical interaction schema that clarifies the user journey and aligns business and design teams.

This structured textual output serves as the foundation for subsequent steps, ensuring functional consistency in the prototype.
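
As a hedged illustration of this step, the sketch below shows a prompt template and a sanity check on the structure one might ask the model to return; the field names (`screens`, `transitions`) are assumptions, not a standard format:

```python
# Illustrative only: a prompt template for flow mapping, plus a
# validation that the returned schema is internally consistent
# (every transition references a declared screen).

PROMPT = """You are a UX architect. For the goal '{goal}', list the
screens, the components on each screen, and the transitions between
screens. Answer as JSON with keys: screens, transitions."""

def validate_flow(flow: dict) -> bool:
    """Check that an LLM answer has the structure later steps expect."""
    screens = {s["name"] for s in flow.get("screens", [])}
    return all(t["from"] in screens and t["to"] in screens
               for t in flow.get("transitions", []))

flow = {"screens": [{"name": "Login", "components": ["email", "password"]},
                    {"name": "Dashboard", "components": ["summary"]}],
        "transitions": [{"from": "Login", "to": "Dashboard"}]}
assert validate_flow(flow)
```

Validating the schema before wireframe generation catches hallucinated screens early, when a correction costs a prompt rather than a redesign.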

Automatic Wireframe Generation

AI plugins for Figma convert prompts into mid-fidelity frames, offering multiple layouts in seconds. This step eliminates manual layout and component assembly.

Each frame represents a functional screen with a visually optimized hierarchy following UX best practices. Designers can then focus on refinement rather than initial construction.

This modular approach relies on pre-defined design tokens to maintain graphic consistency and speed up development handoff.

Content Integration for Rapid Testing

AI also generates copy, images, and contextual elements relevant to each component. Prototypes become immediately usable in user testing sessions.

Realistic content improves feedback quality, allowing early identification of improvement areas and avoiding last-minute tweaks.

Example: A mid-sized fintech firm in Switzerland produced an interactive prototype with realistic content in under an hour, showcasing AI’s ability to deliver testable mockups quickly.

{CTA_BANNER_BLOG_POST}

Optimized Workflow: From Prompt to Prototype

A structured prompt-to-prototype workflow can generate an interactive click-dummy in under 10 minutes. The prototype can be validated by Day 2, significantly reducing rework cycles and decision lead times.

Defining the Intent

The initial phase involves framing the intent along Who–What–Why dimensions. This approach guides screen generation and ensures the prototype’s functional relevance.

By specifying the target audience, business objectives, and priority use cases, AI has a clear framework to create user flows and associated content.

This step, often completed in minutes, structures the entire process and delivers overall coherence.
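
The Who–What–Why brief can be captured as a small structured object that feeds the generation prompt; the field names below are illustrative, not a prescribed schema:

```python
# Minimal sketch (field names are assumptions) of the Who-What-Why
# intent brief that frames screen generation.

from dataclasses import dataclass

@dataclass
class Intent:
    who: str   # target audience
    what: str  # business objective
    why: str   # priority use case / expected value

    def to_prompt(self) -> str:
        return (f"Audience: {self.who}. Objective: {self.what}. "
                f"Rationale: {self.why}.")

intent = Intent(who="field technicians",
                what="log inspections offline",
                why="avoid data loss in low-coverage areas")
print(intent.to_prompt())
```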

Automatic Wireframe Creation

From the defined intent, AI generates screen templates and identifies necessary components. Each element is described, placed, and linked to the corresponding user flow.

Designers then import these wireframes into Figma, where they can adjust styles, colors, and typography without starting from scratch.

This direct handoff prevents misinterpretations and limits iterations while ensuring comprehensive initial documentation.

Rapid Versions and Incremental Testing

Prototypes undergo sub-10-minute iteration cycles, allowing user feedback to be incorporated before a single line of code is written.

Each iteration targets a specific aspect of the journey, whether an interaction, component, or scenario.

This granularity accelerates decision-making and prevents a backlog of adjustments at project end.

Sprint 0 “AI Draft”

An AI-first Sprint 0, completed in half a day, quickly builds a library of prompts and design tokens for the project.

The AI draft is validated the next day before rapid user tests begin, then the roadmap is adjusted based on initial feedback.

Example: A Swiss health tech company reduced design costs by 28% by integrating this AI-first initial sprint, demonstrating the method’s budgetary and temporal impact.

Measurement, Security, and Ecosystem Integration

Data-driven management, security, and compliance are essential to industrialize AI-first prototyping in the Swiss context. AI governance and integration into a modular ecosystem become critical competitive differentiators.

Tracking Key Metrics

The idea-to-prototype cycle time is the main KPI, measured in days or hours to quantify speed gains.

Component reuse rate and designer hours per screen gauge the efficiency of the AI-first process.

An internal squad NPS tracks team satisfaction and provides continuous workflow quality management.
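
For illustration, the first three KPIs reduce to simple arithmetic over raw logs (the data shapes are assumptions):

```python
# Hedged sketch: the KPIs named above computed from raw data.

def cycle_time_days(idea_ts: float, prototype_ts: float) -> float:
    """Idea-to-prototype lead time, from epoch timestamps to days."""
    return (prototype_ts - idea_ts) / 86_400

def reuse_rate(instances: int, unique_components: int) -> float:
    """Share of placed components drawn from existing library items."""
    return 1 - unique_components / instances if instances else 0.0

def hours_per_screen(total_designer_hours: float, screens: int) -> float:
    return total_designer_hours / screens

assert cycle_time_days(0, 2 * 86_400) == 2.0
assert reuse_rate(instances=40, unique_components=10) == 0.75
```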

Data Masking and Governance

Masking sensitive data (PII) in prompts and AI outputs is a prerequisite for GDPR compliance and FINMA requirements.

Opt-out training settings ensure no client information is reused to train external models.

Systematic prompt logging and human validation of mockups ensure complete traceability and verifiable regulatory compliance.
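
A minimal sketch of prompt-side masking might look like this; the patterns below are illustrative, not an exhaustive compliance solution:

```python
import re

# Hedged sketch of PII masking applied to prompts before they leave
# the organization. The patterns are illustrative examples only.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\bCH\d{2}(?:\s?\w{4}){4,5}\b"),
    "PHONE": re.compile(r"\+41\s?\d{2}\s?\d{3}\s?\d{2}\s?\d{2}"),
}

def mask_pii(prompt: str) -> str:
    """Replace recognized PII with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

masked = mask_pii("Contact jane.doe@example.ch, +41 22 345 67 89")
assert "jane.doe" not in masked and "<EMAIL>" in masked
```

Keeping the placeholders labeled (rather than blanking text) preserves enough context for the model to produce useful output while nothing sensitive leaves the prompt pipeline.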

Automated Documentation and Feedback

Automatic generation of specifications in JSON format streamlines the handoff to development teams and CI/CD pipelines.

AI-generated predictive A/B-test scores help prioritize the backlog and improve conversion before deployment.

An integrated feedback loop with Jira turns user feedback into ready-to-develop stories.

Open-Source, Modular Approach

Using scalable open-source building blocks minimizes vendor lock-in and ensures prototyping adaptability to business specifics.

A modular architecture blends from-scratch development and AI plugins to preserve solution longevity.

Example: A public organization in French-speaking Switzerland delivered a multilingual mockup in 72 hours, showcasing process robustness and accessibility compliance.

Accelerate Product Validation with AI-First Prototyping

AI-first prototyping cuts product cycle time by up to 30% by combining a large language model to structure flows, visual tools to generate wireframes, and an ultra-rapid iterative workflow. Measurable metrics, rigorous security management, and open-source modularity ensure reliable industrialization in line with Swiss standards.

Our Edana experts are ready to help you implement AI-first prototyping tailored to your context, enabling faster launches, more accurate user testing, and tighter design budget control.

Discuss your challenges with an Edana expert

PUBLISHED BY

David Mendes

David is a Senior UX/UI Designer. He crafts user-centered journeys and interfaces for your business software, SaaS products, mobile applications, websites, and digital ecosystems. Leveraging user research and rapid prototyping expertise, he ensures a cohesive, engaging experience across every touchpoint.

The Secret of Successful Applications: Integrating Behavioral Science from the Design Phase

Author no. 15 – David

Applications that achieve real success rely not only on a robust architecture or an elegant interface.

They are built on a deep understanding of human behavior, from how attention disperses to the mechanisms of motivation and the subtlest cognitive biases. By integrating behavioral science from the design phase, it becomes possible to transform isolated features into lasting routines, reduce perceived friction, and create enduring emotional attachment. Beyond aesthetics and technical performance, it is the ability to guide every interaction that sets an ordinary app apart from a solution truly adopted and retained by its users.

Understanding Psychological Mechanisms

To design an engaging app, you must understand the limits of human attention and the principles of cognitive load. It’s also crucial to rely on proven motivation models and remain vigilant to biases that influence every decision.

Attention and Cognitive Load

User attention is a limited resource that every interface must preserve. Overcrowded screens or overly complex processes increase cognitive load, often leading to quick abandonment of a task or feature.

To lighten this load, it’s essential to break actions into short, coherent steps, use clear visual cues, and avoid engaging multiple perception channels simultaneously. This approach maintains engagement without exhausting mental resources.

By treating each screen as a micro-goal, you reduce the risk of disengagement and promote a smooth experience where the user feels guided rather than lost.

Motivation and the Fogg Behavior Model

BJ Fogg’s model holds that a behavior occurs when three variables converge: motivation, ability, and a trigger. If any of these elements is missing, the action won’t occur.

In practice, this means providing clear motivation (perceived value), facilitating execution (ease of use), and placing contextual triggers at the right moment. This triangulation maximizes the chances of activation and retention.
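
As a rough formalization (the 0-to-1 scales and the threshold value are illustrative assumptions), the model can be read as: an action fires only when a trigger arrives while motivation times ability clears an activation threshold:

```python
# Rough formalization of the Fogg model: behavior occurs when a trigger
# arrives while motivation x ability is above an activation threshold.
# The 0-1 scales and the threshold value are illustrative assumptions.

def behavior_occurs(motivation: float, ability: float,
                    trigger_present: bool, threshold: float = 0.25) -> bool:
    return trigger_present and (motivation * ability) > threshold

# High motivation but no trigger: nothing happens.
assert not behavior_occurs(0.9, 0.8, trigger_present=False)
# Simplifying the journey (higher ability) activates the behavior.
assert behavior_occurs(0.6, 0.7, trigger_present=True)
```

The multiplicative reading makes the design lesson concrete: when motivation is hard to raise, reducing effort (raising ability) is the cheapest way to cross the threshold.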

For example, a mid-sized Swiss financial services firm reworked its credit simulation process. By replacing a lengthy form with an interactive, guided-choice sequence, it saw a 35% increase in completed simulations—demonstrating that strengthening the trigger and simplifying the journey directly boosts engagement.

Cognitive Biases and Heuristics

Our daily decisions are governed by mental shortcuts, known as cognitive biases, which can either facilitate or hinder feature adoption. Scarcity bias, contrast effects, and social proof are powerful levers to enhance an element’s appeal.

Integrating these principles requires an ethical stance: the goal is not to manipulate users but to provide clear cues that help them make decisions aligned with their real objectives.

A design informed by these biases highlights essential information, structures choices understandably, and creates a flow that respects the user’s cognitive logic.

Using Nudges and Habit-Formation Patterns

Nudges are small incentives that subtly steer behavior without restricting freedom of choice. Combined with reward loops and habit-formation patterns, they firmly embed an app into daily routines.

The Power of Subtle Nudges

A nudge uses lightweight design to encourage a desired action. It might be a color that highlights the most beneficial option or a contextual message that reminds the user of a goal they set.

These micro-interventions work because they appear at key moments—when motivation is still high and friction is low.

By placing a well-timed nudge, the user is guided without feeling forced, fostering a sense of accomplishment rather than pressure.

Variable Reward Loops

Variable reward loops leverage the uncertainty of gratification to maintain interest and curiosity. Unlike fixed rewards, variability triggers small dopamine bursts with each interaction, encouraging repetition.

However, it’s important to avoid addictive patterns: rewards should align with business objectives, and the experience must remain meaningful for the user.

A mid-sized Swiss e-retailer introduced surprise point rewards for each evaluated interaction (review, share, purchase). This approach doubled the review submission rate in under three months, proving that symbolic rewards can sustain long-term engagement.

Contextual Personalization

Personalization must be transparent: briefly explaining why a suggestion or feature is offered builds trust and prevents a sense of intrusion.

By combining simple rule engines with lightweight algorithms, you can deliver a bespoke experience without relying solely on proprietary solutions, preserving flexibility and scalability.
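
A minimal rule engine along these lines might pair each rule with the explanation shown to the user, which is what makes the personalization transparent (the rules and user fields below are illustrative):

```python
# Sketch of a transparent rule engine: every suggestion carries the
# human-readable reason it was triggered. Rules are illustrative.

RULES = [
    {"if": lambda u: u["days_inactive"] > 14,
     "suggest": "re-onboarding tour",
     "because": "you have not signed in for two weeks"},
    {"if": lambda u: u["purchases"] >= 3,
     "suggest": "loyalty dashboard",
     "because": "you are a frequent buyer"},
]

def suggestions(user: dict) -> list[tuple[str, str]]:
    """Return (suggestion, explanation) pairs for every matching rule."""
    return [(r["suggest"], r["because"]) for r in RULES if r["if"](user)]

user = {"days_inactive": 20, "purchases": 1}
print(suggestions(user))
# [('re-onboarding tour', 'you have not signed in for two weeks')]
```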

{CTA_BANNER_BLOG_POST}

Optimizing Onboarding and Activation from First Use

The moment users arrive in an app is critical: effective onboarding turns a first trial into regular usage. Micro-interactions and immediate feedback maximize understanding and trust from the outset.

Progressive Onboarding and Transparency

Progressive onboarding prevents overwhelming users with information at first launch. It reveals features step by step, responding to expressed needs or actual navigation.

It’s vital to justify each permission request or configuration step to foster trust and optimize completion rates.

By clarifying concrete benefits early on (time savings, personalization, security), you reduce friction and drive sustainable engagement.

Engaging Micro-Interactions

Micro-interactions are small animations or visual cues that signal success, loading, or application states. They create a non-verbal dialogue that humanizes the experience.

When well-calibrated, they reinforce mastery and add a touch of delight at every stage.

A Swiss university redesigned its native mobile course registration app by integrating brief validation animations and subtle sound feedback. The result was a 40% reduction in input errors and a 25% increase in new user activation, demonstrating the direct impact of micro-interactions on operational efficiency.

Immediate Feedback Systems

Every user wants confirmation that their action was received. Immediate visual or haptic feedback prevents uncertainty and redundant attempts.

By clearly indicating progress (loading bars, green checkmarks), you sustain a sense of advancement and avoid frustration from waiting.

Feedback can also encourage exploration of other features by offering contextual suggestions based on initial actions.

Designing a Responsible, Non-Addictive Experience

Sustaining engagement should never come at the cost of counterproductive addiction. A responsible app respects user freedom and incorporates safeguards against over-engagement.

Balancing Engagement and Ethics

Persuasive techniques can quickly slide into undue exploitation of cognitive biases. It’s essential to establish an ethical framework where every lever is justified by genuine user benefit.

Overly frequent reminders or intrusive notifications damage trust and well-being, risking permanent disengagement.

By setting alert thresholds, you ensure responsible use that places the app’s performance at the service of user goals, not the other way around.

Protection Against User Overload

Feature bloat can create a sense of overwhelm. It’s crucial to maintain clear hierarchy and offer only what’s essential in each context.

A modular, scalable approach allows functions to be added without reinventing navigation, and modules to be activated or deactivated based on user profile and frequency of use.

By limiting simultaneous choices and providing streamlined paths for frequent tasks, you preserve attention and satisfaction.

Transparency and User Control

Explaining what data is collected, how recommendation algorithms work, and which settings can be fine-tuned reinforces users’ sense of security.

A simple dashboard to manage notifications or reminder frequency lets users personalize their experience.

This transparency builds trust, a sine qua non for long-term adoption and a balanced relationship between the user and the app.

Embrace Behavioral Science for Sustainable, High-Performing Applications

By accounting for attention limits, leveraging proven motivation models, and applying nudge and habit-formation techniques, any app can turn its features into habitual routines. Progressive onboarding, engaging micro-interactions, and immediate feedback maximize activation, while a responsible approach ensures an ethical, lasting user relationship.

Our experts at Edana are ready to guide you in designing digital experiences centered on human behavior, combining technical performance, open-source modularity, and respect for your users’ choices.

Discuss your challenges with an Edana expert

PUBLISHED BY

David Mendes

David is a Senior UX/UI Designer. He crafts user-centered journeys and interfaces for your business software, SaaS products, mobile applications, websites, and digital ecosystems. Leveraging user research and rapid prototyping expertise, he ensures a cohesive, engaging experience across every touchpoint.

Mapping UX for Better Decisions: User Journey, Experience Map, or Service Blueprint?

Author no. 15 – David

In many digital projects, user experience (UX) is often the overlooked element: approached after functional and technical choices, it results in poorly adopted tools and fragmented journeys. UX mapping thus becomes a strategic lever to structure decisions and align business and IT.

By selecting the appropriate tool—Empathy Map, User Journey Map, Experience Map, or Service Blueprint—Swiss organizations can prioritize efforts where real value lies, reduce project risks, and optimize their digital transformation ROI.

Understanding the User with the Empathy Map

Identifying the user’s deep needs from the outset allows you to base decisions on facts rather than assumptions. The Empathy Map formalizes what the user says, thinks, feels, and does, revealing concrete motivations and frustrations.

Before defining features or design, the Empathy Map invites teams to document and share a common vision of the user. This visual tool serves as a foundation to avoid decisions based on preconceived ideas or internal routines. By framing actual feelings and behaviors, it fuels prioritization and design workshops.

Purpose and Scope of the Empathy Map

The Empathy Map aims to capture the emotional and behavioral dimensions of the user in a concise format. It involves gathering qualitative observations from interviews, user tests, or field feedback to build a vivid user portrait.

The result is a shared artifact, usable from the initial scoping and definition phases. It also serves as a reference throughout the project to ensure that functional and technical choices remain user-centered.

Within an organization, this tool helps reduce understanding gaps between business, UX, and IT teams. Everyone then has a common representation of user expectations and pain points to address as a priority.

Collaborative Creation and Facilitation

An Empathy Map workshop ideally brings together business leaders, designers, an IT department representative, and sometimes a sponsor. Diversity of profiles ensures all viewpoints are considered.

The facilitator guides the information gathering by asking targeted questions: What does the user say? What do they think? How do they feel and what actions do they take? Field insights support each quadrant.

At the end of the workshop, the deliverable is photographed, digitized, and integrated into the project kickoff kit. It forms the basis for User Journey Maps and prioritization choices, ensuring a genuinely user-centered approach.

Example from a Field Service SME

An SME specializing in industrial equipment maintenance used an Empathy Map to clarify the frustrations of its field technicians. Before this exercise, specifications for the future mobile tool were based on a very generic business requirements document.

The Empathy Map highlighted a critical need: the fear of losing inspection data in areas with limited network coverage. Technicians felt stressed while entering data, which prolonged their intervention times.

This insight led to prioritizing an automatic synchronization and local backup feature at the prototype stage. The project thus avoided costly rework at a later stage and ensured rapid adoption by field users.

Optimizing Each Step with the User Journey Map

Mapping a focused journey allows you to identify touchpoints, breakpoints, and improvement opportunities. The User Journey Map serves as a decision-making tool to determine which features are truly useful and streamline the experience.

The User Journey Map focuses on a specific path defined by a user goal (signing up, requesting a service, making a purchase). It details each step, from awareness to conversion or need resolution.

Clear Definition of Objectives and Personas

Before tracing the journey, it is essential to define the relevant persona and the critical step to optimize. This step ensures that the mapping remains pragmatic and manageable.

The persona combines insights from the Empathy Map with quantitative data: usage volumes, bounce rates, time on task. The journey’s objective is then contextualized with business metrics.

This precision calibrates the level of detail: which interactions to track, which measurement tools to use, and which alternative scenarios to consider (browsers, devices, usage contexts).

Analysis of Touchpoints and Frictions

The User Journey Map identifies key moments when the user switches channels, interacts with a system, or invokes an internal manual process. Each touchpoint is compared with expectations and observed issues.

Frictions are noted along with their impact on conversion or satisfaction: complex fields, overly long pages, uncontrolled response times, functional gaps. These irritants become prioritization criteria.

The mapping also reveals quick-win optimization opportunities: form simplification, automated confirmation email, consolidation of internal processes, etc.

Example from an Insurance Cooperative

An insurance cooperative created a User Journey Map around submitting an online claim. Until then, the process involved five successive forms, causing drop-offs at the third step.

The mapping revealed that most policyholders did not have all supporting documents on hand and returned backward to retrieve them, increasing clicks and delays.

Following this analysis, the cooperative merged two forms and added contextual messages listing required documents from the first step. The completion rate increased by 20% within the first week of deployment.

{CTA_BANNER_BLOG_POST}

Gaining Perspective with the Experience Map

The Experience Map broadens the view beyond a single journey to cover the user’s overall experience. It uncovers systemic irritants and helps prioritize initiatives based on their cross-functional impact on satisfaction and productivity.

Unlike the User Journey Map, the Experience Map incorporates all lifecycle phases and channels used: websites, mobile apps, call centers, retail outlets, post-service processes. It provides a holistic vision.

Multichannel Approach and Timeline

The Experience Map is structured along a timeline: pre-sale, sale, onboarding, follow-up, and loyalty. Each segment includes touchpoints and associated emotions.

Recurring irritants are highlighted where they disrupt multiple stages: long delays, missing information, inconsistent navigation across channels, file transfer difficulties.

This view groups irritants by theme and assesses the overall impact of an initiative, whether technical, organizational, or relational.

Prioritizing Initiatives by Overall Impact

With the Experience Map, prioritization no longer depends solely on the criticality of an isolated feature but on the domino effect across the entire experience.

For example, automating support can reduce response times and boost customer satisfaction at every lifecycle stage, whereas redesigning a single form would have localized impact.

Mapping allows estimating ROI by combining satisfaction impact, incident costs, and internal productivity gains.

Example from a Public Transport Network

A public transport network created an Experience Map for the entire traveler journey, from initial planning to subscription management and complaints handling. Digital and physical journeys were previously managed independently.

The mapping revealed a breakpoint: the mobile app did not reflect traffic incidents reported in the call center, resulting in duplicate calls and a high volume of unreliable tickets.

Management launched a cross-channel initiative to synchronize the incident back office with the app and website. Redundant calls fell by 35%, and overall satisfaction improved.

Operational Alignment with the Service Blueprint

The Service Blueprint connects the user-visible experience to back-office processes, unveiling dependencies and bottlenecks. It is a key tool to align UX with operational reality and ensure service sustainability.

By mapping front-stage interactions and back-stage activities simultaneously, the Service Blueprint highlights the contribution of each team, system, and internal tool. It clarifies the efforts needed to support the promised experience.

Front-Stage and Back-Stage Visibility

The front-stage layer lists all user actions: screens viewed, forms completed, phone calls, physical interactions. Each moment is annotated with its technical medium.

The back-stage layer describes associated internal processes: data entry, system handovers, hierarchical approvals, automated scripts, manual interventions.

This dual visualization exposes points where a user incident hides an internal issue: API friction, support team overload, lack of alerting in certain workflows.

Mapping Interactions and Dependencies

Each step is linked to an internal owner, a tool, and an SLA. Bottlenecks are identified where workload or delays exceed expected thresholds.
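
In machine-readable form, such a blueprint step and the bottleneck check might look like this (the field names are assumptions, not a standard blueprint schema):

```python
# Hedged sketch: blueprint steps with owner, tool, and SLA, plus a
# check flagging every step whose observed delay exceeds its SLA.

steps = [
    {"step": "book appointment", "owner": "front desk",
     "tool": "scheduler", "sla_hours": 1, "observed_hours": 0.5},
    {"step": "transfer to billing", "owner": "admin team",
     "tool": "manual export", "sla_hours": 4, "observed_hours": 26},
]

def bottlenecks(steps: list[dict]) -> list[str]:
    """Names of the steps that break their SLA."""
    return [s["step"] for s in steps
            if s["observed_hours"] > s["sla_hours"]]

print(bottlenecks(steps))  # ['transfer to billing']
```

Expressing the blueprint as data rather than a drawing means the bottleneck list can be regenerated whenever observed delays change.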

The Service Blueprint facilitates priority negotiations among IT, business, and support by objectifying process costs and impacts on the user promise.

Example from a Hospital Association

A hospital association developed a Service Blueprint for the appointment and billing process. Clinical, administrative, and IT teams had been operating in silos.

The mapping revealed a daily manual loop for transferring data between the scheduling software and billing system, causing errors and excessive processing times.

A microservice architecture was deployed to automate this transfer using open-source technologies and a RESTful API design. Billing time dropped from three days to a few hours, while error rates decreased.
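
The core of what such a service automates can be sketched as a simple mapping between the two systems’ formats; the record fields below are hypothetical, not the organization’s actual schema:

```python
# Illustrative sketch (endpoint and field names are assumptions) of the
# transfer logic automated by the microservice: reshape a scheduling
# record into the billing system's expected payload.

def to_billing_payload(appointment: dict) -> dict:
    """Map a scheduling record to the billing system's format."""
    return {
        "patient_id": appointment["patient"]["id"],
        "service_code": appointment["procedure_code"],
        "date": appointment["start"][:10],  # keep the date part of the ISO stamp
    }

appointment = {"patient": {"id": "P-123"},
               "procedure_code": "CONSULT-30",
               "start": "2024-05-02T09:30:00"}
print(to_billing_payload(appointment))
```

Wrapping this mapping behind a REST endpoint, instead of a daily manual export, is what removed the error-prone loop between the two systems.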

Turn UX Mapping into a Competitive Advantage

By investing in UX mapping as a management tool, organizations align their decisions with concrete user data rather than assumptions. Each method—Empathy Map, User Journey Map, Experience Map, Service Blueprint—addresses a specific need and helps decide what to simplify, automate, or rethink.

In an environment demanding reliability, service quality, and cost control, these tools help move beyond subjective debates, prioritize initiatives, and optimize digital solution adoption.

Our Edana experts guide you in selecting and implementing the most relevant UX mapping approach for your organization, combining open source, modularity, and contextual insight.

Discuss your challenges with an Edana expert

PUBLISHED BY

David Mendes

David is a Senior UX/UI Designer. He crafts user-centered journeys and interfaces for your business software, SaaS products, mobile applications, websites, and digital ecosystems. Leveraging user research and rapid prototyping expertise, he ensures a cohesive, engaging experience across every touchpoint.