Author: Manu

  • Post-Launch: How Do You Know If Your Feature Actually Worked?

    Your feature has now been launched to all one million high-net-worth clients.

    But how do you know whether it solved the problem you set out to solve?

    You collected feedback from users during the pilot phase, and that confirmed the feature worked and that clients could use it.

    However, pilot testing involves only a small number of users.

    To determine whether the feature actually solved the problem, you need to see how many more high-net-worth clients use the feature after launch.

    You do this in two ways. You measure the metrics that were defined earlier, and you track the Customer Effort Score.

    Together, these signals show whether the feature solved the problem that was causing churn.

    Give users time to use the feature before measuring the metrics

    You cannot measure the impact of a feature immediately after launch because clients need time to discover the feature and start using it.

    You usually wait about thirty days after launch before measuring the first set of results. This gives enough time for clients to start using the feature and for you to collect enough usage data to see meaningful patterns.

    After that, you measure the same metrics again at sixty days and ninety days.

    Tracking the metrics at 30, 60, and 90 days shows whether clients continue to use the feature and whether the improvements last over time.

    Comparing the metrics before and after launch

    The simplest way to tell whether the feature worked is to compare how clients contacted their RMs before the feature existed with how they contact them after launch.

    For example, if the feature went live on June 1, 2026, you measure the metrics on July 1, 2026, and compare them to the figures from July 1, 2025, a year before the feature existed.

    Feature before and after launch

    Before the feature existed, clients contacted their relationship managers through other channels, such as direct phone calls or emails outside the app.

    The table above compares the same KPIs before and after the feature launched.

    After launch, more clients reached their relationship managers through the app, more requests were handled within the expected SLAs, and fewer cases required escalation. Net asset outflow per client began to decline, and as a consequence, churn also declined.

    These changes show that the feature has reduced the friction that previously prevented clients from reaching their relationship managers.
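    The comparison itself is simple arithmetic. Here is a minimal sketch in Python, with made-up KPI values standing in for the real figures:

    ```python
    # Illustrative before/after KPI comparison; these numbers are
    # made up for the sketch, not the bank's actual figures.
    before = {"in_app_contact_rate": 0.04, "sla_response_rate": 0.70,
              "escalation_rate": 0.18, "net_asset_outflow_pct": 0.25}
    after = {"in_app_contact_rate": 0.15, "sla_response_rate": 0.82,
             "escalation_rate": 0.09, "net_asset_outflow_pct": 0.12}

    def kpi_deltas(before, after):
        """Absolute change per KPI; positive means the metric went up."""
        return {k: round(after[k] - before[k], 4) for k in before}

    print(kpi_deltas(before, after))
    ```

    Contact and SLA rates should rise after launch, while escalation and outflow should fall, which is exactly the pattern described above.
    
    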

    Apply segmentation to the metrics

    Looking at the metrics for all high-net-worth clients is not enough. You also need to know how these metrics change for your other major segments.

    Earlier in the series, you identified that traveling retired business owners were churning more than other segments. You need to see whether the feature is solving the problem for this segment. You also need to see how the feature impacted your major client segments like retired business owners and passive heirs, who account for the majority of your clients.

    You analyze the data across all your major client segments:

    • All high-net-worth clients. 
    • Retired business owners (domestic). 
    • Retired business owners (traveling). 
    • Passive heirs (domestic). 
    • Passive heirs (traveling).

    For example, the overall metrics might show great improvements across the entire client base. But when you examine the metrics for traveling retired business owners, you might discover that their escalation rates are still high or that their requests are still taking longer to resolve. If that happens, the original problem still exists for that segment.

    Segment-level analysis helps you see if the feature is working for all segments or if traveling retired business owners are still experiencing the same friction that was causing them to churn. If their metrics haven’t improved, you know where to focus additional improvements.
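    Segment-level flagging can be sketched the same way. The segment names come from the article, but the threshold and the rates below are assumptions for illustration:

    ```python
    # Sketch: flag segments whose post-launch metrics still miss the target.
    # The 10% threshold and all rates are illustrative assumptions.
    TARGET_ESCALATION_RATE = 0.10

    escalation_by_segment = {
        "all HNW clients": 0.08,
        "retired business owners (domestic)": 0.07,
        "retired business owners (traveling)": 0.16,
        "passive heirs (domestic)": 0.06,
        "passive heirs (traveling)": 0.09,
    }

    def segments_needing_work(rates, target):
        """Return the segments whose escalation rate is still above target."""
        return [seg for seg, rate in rates.items() if rate > target]

    print(segments_needing_work(escalation_by_segment, TARGET_ESCALATION_RATE))
    # With these sample numbers, only the traveling retired
    # business owners are flagged.
    ```
    
    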

    Measuring satisfaction using Customer Effort Score

    The metrics show how clients used the feature, but they don’t show how easy it was for clients to use it.

    To measure that, you track the Customer Effort Score.

    Customer Effort Score measures how easy it was for a client to complete a task. In this case, the task is reaching their relationship manager when they need help.

    You measure CES with a single question: 

    “How easy was it to reach your relationship manager using this feature?” 

    Clients respond using a five-point scale.

    Score   Scale
    1       Very difficult
    2       Difficult
    3       Neutral
    4       Easy
    5       Very easy

    If most responses fall in 4 or 5, the feature is reducing friction. If responses cluster around 2 or 3, the experience still needs improvement.

    To ensure meaningful responses, you measure CES only for users who have used the feature at least twice.
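    Scoring the responses is straightforward. A small sketch, assuming each response records how many times the client has used the feature alongside their 1-to-5 score:

    ```python
    # Sketch: score CES responses from eligible users only
    # (those who have used the feature at least twice).
    from collections import Counter

    def ces_summary(responses, min_uses=2):
        """responses: list of (uses_count, score 1-5).
        Returns the score distribution and the share of 4-5 ("easy") scores."""
        eligible = [score for uses, score in responses if uses >= min_uses]
        dist = Counter(eligible)
        easy_share = sum(dist[s] for s in (4, 5)) / len(eligible)
        return dist, round(easy_share, 2)

    sample = [(3, 5), (2, 4), (1, 2), (4, 4), (2, 3)]  # (uses, score), made up
    dist, easy_share = ces_summary(sample)
    print(easy_share)  # 0.75 of eligible responses are 4 or 5
    ```

    A high share of 4s and 5s means the feature is reducing friction; a cluster around 2s and 3s means the experience still needs work.
    
    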

    Just like the metrics, CES should also be analyzed by segment. This means you review CES scores for:

    • All high-net-worth clients
    • Retired business owners (domestic)
    • Retired business owners (traveling)
    • Passive heirs (domestic)
    • Passive heirs (traveling)

    Now the question is: how do you collect CES responses from one million clients?

    Collecting CES responses at scale 

    Since one million high-net-worth clients use the feature, feedback must be collected at scale.

    You do this through an in-app survey that appears after a client has used the feature. For example, after a client contacts their relationship manager through the app two or more times, the app displays the Customer Effort Score question.

    Survey software makes this easier. It lets you collect responses directly inside the mobile app, calculate CES scores automatically, and analyze results across different client segments.

    You collect enough responses from each segment to understand how that group feels about the feature. For example, you might collect 3,600 responses from retired business owners and 2,500 from passive heirs.

    If CES scores are high across all segments, it indicates that clients find the feature easy to use regardless of their profile or travel status.

    Did the feature solve the problem?

    When these metrics improve together, the answer becomes clear.

    More clients are reaching their RMs through the app. The bank is handling more requests within SLA, fewer cases are escalating, asset outflows are stabilizing, and churn is declining. 

    This shows that the feature is solving the problem that caused clients to leave in the first place.

    This is the final piece of the fintech puzzle.

    The work started with identifying which clients were churning and why. Client interviews uncovered the root cause. The solution was designed within real technology and compliance constraints, built with multiple engineering teams, and verified using real client data.

    When all five pieces come together in the right order, the result isn’t just a new feature. It’s a solution to a real customer problem, and that’s what drives growth in high-net-worth banking.

  • Pilot Testing: The Final Step Before Launch

    Your feature has now been built.

    It has passed testing in both lower and higher environments. From a technology perspective, everything is working fine.

    But there is still one important step left. The feature has not been tested by real clients.

    That is what pilot testing is for.

    What’s pilot testing?

    Pilot testing means releasing a pilot version of the app to a small group of users before it is rolled out to all one million high-net-worth clients.

    These users interact with the feature exactly as they would after launch. They call their relationship managers through the feature, send emails, and see what happens when the relationship manager is unavailable. At the same time, the relationship managers and backup support teams receive those requests exactly as they would once the feature goes live.

    Pilot users test the scenarios outlined earlier in the pre-launch plan. You want them to test all the scenarios so that nothing is missed.

    Invite clients from your key segments

    You invite high-net-worth clients from your main client segments.

    The most important segments are retired business owners and passive heirs, since these two groups represent roughly 70% of the high-net-worth client base.

    If possible, you can also invite clients from other segments, such as entrepreneurs and professionals.

    Getting feedback from different segments helps you understand how different types of clients react to the feature before it goes live to the entire client base.

    The challenge of recruiting pilot participants

    Recruiting high-net-worth clients for pilot testing is not always easy. These clients are extremely wealthy, busy, and often have limited time.

    The best way to invite participants is through relationship managers. Relationship managers know which clients have strong relationships with the bank and may be open to participating in the pilot.

    If it is hard to recruit high-net-worth clients for pilot testing, another alternative is to invite internal employees who are also high-net-worth clients of the bank. These internal clients are usually senior managers or executives. They are easier to coordinate with and can still provide meaningful feedback.

    What happens during pilot testing

    Pilot testing often takes place at the bank’s office.

    Pilot users receive devices with the pilot version installed, or they install the pilot version on their own phones. They then begin testing the feature while the product team observes how they use it.

    Whenever possible, you should sit in the room while pilot users test the feature so you can see how they interact with it. Watch how they navigate the screen, where they tap, and where they hesitate. These observations often reveal usability issues that would be difficult to detect through analytics alone.

    Analytics can show what actions users take, but watching users directly helps you understand why they behave the way they do.

    For example, a client may hesitate before tapping the contact button or spend time trying to understand what the feature does. These signals help you identify usability issues before the feature goes live to all clients.

    Asking for feedback

    After the testing session, ask a few simple questions.

    What do you think about this feature? Was anything confusing? What would make this better? Would you use this when traveling?

    You are collecting qualitative feedback from real users who represent a sample of your audience. This feedback is extremely valuable. You document the key observations and then send a short update to leadership summarizing what you learned during the pilot phase.

    This is only the first round of feedback. Once the feature goes live, you will collect feedback from a much larger group of clients.

    When to move forward

    Pilot testing uses only a small number of users, so the goal is not to collect extensive feedback. The goal is to confirm that the feature solves the core problem.

    In this case, the problem was simple. Clients could not reach their relationship managers when the relationship managers were unavailable.

    During the pilot, you verify that clients can successfully contact their relationship managers using the feature and that the backup team supports the client when the relationship manager is unavailable.

    Once you confirm this, the feature is launched to high-net-worth clients in phases. It first goes live to 10% of the one million clients, followed by 25%, then 50%, and finally 100%.

    What have you learned?

    Pilot testing confirms that the feature works with real users before it reaches the entire client base.

    You invited clients from your key segments, observed how they interacted with the feature, and collected early feedback. Most importantly, you confirmed that clients could reach their relationship managers through the feature when they needed assistance, and that the backup team responded promptly when the relationship manager was unavailable.

    Once pilot testing is complete, the feature moves to the next stage: launching to high-net-worth clients in phases.

    At that stage, you begin monitoring adoption, response times, and business impact.

  • How To Estimate Delivery Timelines For Your Fintech Feature

    Your engineering managers have provided their initial estimates for your fintech feature.

    They look something like this:

    • Mobile app development: 2–3 months
    • Microservices changes: 1 month
    • Call routing updates: A few weeks

    These are high-level estimates.

    Engineering managers typically estimate based on the team’s experience, current capacity, and similar features delivered in the past.

    That’s normal.

    But here’s the problem.

    They don’t see every nuance involved in building this specific feature.

    They’re not thinking about things like:

    • Edge cases in the UI
    • Missing API fields
    • Analytics integration
    • Testing scope in each sprint

    So who helps you understand those nuances?

    Your core team.

    Who is your core team?

    Your core team is the group of individual contributors who will actually build the feature.

    They include developers and testers from the following teams:

    • App development
    • Backend (microservices) development
    • Database engineering
    • Data engineering
    • IVR development (as call routing is involved)

    They report directly to the engineering managers.

    Your job is to work with them directly, break the work into smaller chunks, and give leadership realistic timelines.

    Let’s see how to do that.

    Step 1: Give context before estimating

    Before you talk about timelines, walk your core team through the Product Blueprint. Show them who is churning, what problem you’re solving, why this feature exists, and what the final designs look like.

    When developers understand the business problem, they think differently. They identify edge cases earlier, question assumptions, and anticipate risks.

    They are also more motivated because they see the bigger picture.

    Step 2: Prepare your timeline estimation sheet

    Before meeting with each team, prepare your planning sheet.

    Here’s what it looks like.

    The sheet has sprints along the top and teams along the side.

    A sprint is a fixed time period, usually two weeks, during which a team commits to completing a defined chunk of work.

    Always estimate using working days, not calendar days. Two weeks typically means 10 working days.

    Each team is color-coded in the sheet. When you capture a dependency, you highlight it in that team’s color so it’s immediately clear who is waiting on what. This makes dependencies visible rather than buried in meeting notes.

    Step 3: Meet with each team separately

    You schedule separate estimation sessions with each team.

    Let’s start with the mobile app team.

    You show the iOS and Android teams the final design and ask:

    “If you were building this screen, what would you build first? What would you build next?”

    You ask both developers and testers this question together.

    They might respond like this:

    • Sprint 1: Build layout
    • Sprint 2: Add buttons and interactions
    • Sprint 3: Integrate backend API
    • Sprint 4: Add analytics tracking

    Developers and testers give you one combined estimate per sprint that includes both build time and testing time.

    Each sprint must end with something that is both built and tested.

    If a sprint has 10 working days, developers might build for 7 days and testers might need 3 days. Testing is not extra time after the sprint. It is part of the sprint.
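    That split can be expressed as a quick sanity check. The 10-day sprint and the 7/3 split come straight from the example above; real sprints will vary:

    ```python
    # Sketch: each sprint's 10 working days are split between
    # build and test; both must fit inside the sprint.
    SPRINT_DAYS = 10  # two weeks of working days, per the article

    def sprint_plan(dev_days, test_days):
        """Validate that build + test fit inside one sprint."""
        assert dev_days + test_days <= SPRINT_DAYS, "work must fit in the sprint"
        return {"dev": dev_days, "test": test_days,
                "slack": SPRINT_DAYS - dev_days - test_days}

    print(sprint_plan(7, 3))  # testing happens inside the sprint, not after it
    ```
    
    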

    At the end of each sprint, you demo the completed work to stakeholders. It’s a great way to communicate progress and keep everyone aligned.

    As you can see in the sheet, this is also where dependencies get captured. For example, the mobile app team needs analytics codes before Sprint 3 starts. You add that to the “Dependency” row and color-code it to match the mobile app team.

    You’ll also notice in the sheet that the later sprints have capacity reserved for higher environment testing and pre-launch. This is important. Higher environments are where the mobile app and all the backend systems connect and get tested together. Pre-launch is where pilot users test the feature before it goes live. Defects will come up in both stages. If you don’t reserve capacity to fix them, your timeline will quietly extend.

    Backend, data, database, and call routing teams

    You repeat the same process with:

    • Backend (microservices) engineers
    • Data engineers
    • Database engineers
    • Call routing (IVR) developers

    We covered what each of these teams does in the previous article, so we won’t go through them again here. The process is the same: ask what they would build first, slot it into sprints, and capture any dependencies. All of it gets added to the sheet.

    One final alignment meeting

    Once you’ve completed the estimation sessions with every team, you schedule one final meeting with your entire core team and their engineering managers.

    You walk everyone through the sheet and call out the dependencies between teams. The goal is to make sure everyone is aligned. If the engineering managers’ estimates are too far off from what the core team estimated, that needs to be discussed and resolved in the room.

    You communicate timelines to leadership only after everyone is aligned.

    Add a buffer to your timeline

    A timeline is always an educated guess.

    The accuracy of that guess depends on two things. How skilled the developers are. And whether they’ve built something similar before. If the team has done this kind of work before, their estimates will be more accurate. If not, they might need to run a quick proof of concept first, as we saw earlier, to arrive at a rough estimate.

    That’s exactly why you add a buffer.

    Add two to three sprints depending on the complexity of the feature. Something always comes up in software delivery. No matter how carefully you plan, something can shift during development. Buffer is what protects your timeline when things change.
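    The arithmetic behind the buffer is simple. A sketch, assuming two-week sprints of 10 working days each:

    ```python
    # Sketch: total timeline = estimated sprints + a 2-3 sprint buffer.
    # Sprint length and the sprint counts below are illustrative.
    WORKING_DAYS_PER_SPRINT = 10

    def timeline_days(estimated_sprints, buffer_sprints=2):
        """Total working days, including buffer for the surprises
        that always come up during software delivery."""
        return (estimated_sprints + buffer_sprints) * WORKING_DAYS_PER_SPRINT

    print(timeline_days(8, buffer_sprints=3))  # 8 estimated + 3 buffer = 110 working days
    ```
    
    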

    Present to leadership and get alignment

    Once you’ve finalized the timeline with your core team and engineering managers, you present it to leadership.

    This is similar to how you presented the Product Blueprint, the wireframes, and the metrics framework. You walk leadership through the timeline, explain how the work is broken down into sprints, and show where dependencies exist.

    Get alignment in this meeting. If leadership has questions or concerns about the timeline, you need to address them now before development starts.

    Only after you have alignment do you move to the next step: starting development.

    From rough estimates to a credible timeline

    You started with rough estimates from your engineering managers.

    Then you worked closely with each team to break the work into smaller chunks and slot everything into sprints. You captured dependencies and reserved capacity for higher environment testing and pre-launch. You added a buffer because something always comes up during software delivery. And you presented the timeline to leadership and got their alignment.

    All of it is documented in one sheet.

    This marks the end of Piece 4 of the fintech puzzle: The Solution Build.

    Before development starts, you update Piece 4 of the Product Blueprint with the delivery timeline sheet and the KPIs you defined earlier.

    Once the teams start building, your job is to guide them through each sprint as they complete their work. At the end of each sprint, you demo the completed work to key stakeholders. Usually, someone from the leadership team represents those demos. You track progress, resolve blockers, and ensure dependencies are met on time.

    The next phase is Piece 5 of the fintech puzzle, Launch and Iteration. That’s where we’ll look at what activities to do pre-launch and post-launch.

  • How Enterprise Fintech Software Gets Built And Released In Stages

    Low-fi wireframe of the feature

    Designers have built the final screen. Leadership has approved it.

    The software architect has created the architecture diagram and defined the API contract, which specifies exactly what data the mobile app will request and what the backend will return.

    You’ve also seen how multiple teams use different technologies to build your feature.

    And you’ve defined the metrics.

    So now it’s time to build it and launch it, right?

    Not exactly.

    Because in enterprise fintech, you’re never building a feature in isolation.

    That feature has to pull real relationship manager (RM) data. It has to integrate with backend systems. It has to route calls to the RM. And it has to work seamlessly alongside every other feature inside the mobile app.

    That’s why enterprise software isn’t built and released in one shot.

    It’s built, tested, and launched in stages.

    Let’s walk through what those stages look like.

    Stage 1: Lower Environments

    An environment is a separate copy of the software where teams can build and test without impacting real customers. These environments are hosted in the cloud, which is simply remote servers managed by providers like AWS or Azure.

    Think of it as a rehearsal room. You wouldn’t test a new play in front of a live audience. You rehearse first.

    In lower environments, each team builds its part independently.

    Let’s see what each team does and why they work in separate environments.

    Mobile app developers build the screen

    The layout. The RM photo placement. The name and title fields. The call and email buttons.

    Once the developers build the screen, the testers test it on both iOS and Android.

    They check: Does the layout look correct? Does the image load properly? Do the buttons respond?

    If something doesn’t match the final design, it goes back to the developers to fix.

    They use sample data because at this stage, the backend systems aren’t connected to the app yet. For example, a sample RM photo, a name like “Jane Smith,” and an email address.

    At the same time, backend engineers build the microservice

    Backend engineers build the microservice that will return RM data following the API contract. They ensure it returns the exact fields the mobile app expects: RM name, title, photo URL, email, and phone number.

    Backend engineers also use sample data. The microservice returns sample RM information—a test name, a test photo URL, a test email—so they can verify it works before connecting to real customer data.

    Data engineers prepare the pipeline that pulls RM information into the data warehouse. Database engineers ensure the required data fields exist.

    Why work in separate environments?

    To allow work to be done in parallel.

    If all teams worked in one environment, one team would block another. The mobile app team would have to wait until the backend microservices and data pipeline are done.

    In lower environments, the mobile app works with sample data only.

    When you tap “Call,” it doesn’t route through the actual call routing system. When you tap “Email,” it doesn’t notify the actual backup team.

    Stage 2: Higher Environments

    Once each team finishes building in their lower environment, the work moves to a shared integration environment.

    This is where everything connects.

    Now the RM screen calls the actual backend microservice, pulls actual RM data, connects to actual call routing logic, and sends actual emails.

    This is where things often break.

    The mobile app might expect the backend to return “RM_Photo,” but the backend returns “Manager_Photo.” Or the data pipeline takes 30 seconds to update when the app expects it in 5 seconds.

    Small mismatches like these break the feature.
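    Mismatches like the RM_Photo versus Manager_Photo example can be caught with a simple contract check. The field names below are illustrative, not the bank's real API contract:

    ```python
    # Sketch: catch field-name mismatches between the API contract and an
    # actual backend response before integration testing. Field names are
    # illustrative assumptions.
    CONTRACT_FIELDS = {"RM_Name", "RM_Title", "RM_Photo", "RM_Email", "RM_Phone"}

    def contract_mismatches(response):
        """Return (fields missing from the response, unexpected extras)."""
        got = set(response)
        return sorted(CONTRACT_FIELDS - got), sorted(got - CONTRACT_FIELDS)

    # The backend accidentally renamed RM_Photo to Manager_Photo:
    missing, extra = contract_mismatches(
        {"RM_Name": "Jane Smith", "RM_Title": "Senior RM",
         "Manager_Photo": "https://example.test/jane.jpg",
         "RM_Email": "jane@example.test", "RM_Phone": "+1-555-0100"})
    print(missing, extra)  # ['RM_Photo'] ['Manager_Photo']
    ```
    
    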

    A dedicated testing team handles this stage. Much of the testing is automated.

    The automated tests check if the RM photo loads, what happens if the RM has no photo, whether calls route correctly when the RM is unavailable, and if the backup team gets copied on emails. They also test if the feature integrates properly with other features in the app.

    When tests fail, the issue goes back to the relevant team to fix, rebuild, and retest. This cycle can repeat multiple times.

    Once all the integration tests pass, the feature is ready for pilot testing.

    Stage 3: Pre-Launch

    Once integration testing is complete, the feature moves to pilot testing.

    Real users test the app. They’re called pilot users.

    You invite a small group: relationship managers and a few high-net-worth clients who volunteer.

    What do they test?

    They call their RMs. Send emails. See what happens when the RM is unavailable. They test different scenarios to find issues before launch.

    This phase usually lasts a day or two, but it happens 10 to 15 days before the actual launch.

    Why the gap?

    Because if something breaks, you can stop the feature from going live before one million clients see it.

    It’s much easier to delay a release than to roll back a live feature that everyone has already seen.

    Stage 4: Launch

    Even at launch, your feature doesn’t go to 100% of users immediately.

    Enterprise fintech releases roll out in phases.

    For example, your RM screen might first go live to 10% of high-net-worth clients. Then 25%. Then 50%. Then 100%.

    Why?

    Because this is the first time the feature is being used by real clients at scale.

    If something unexpected happens at 10% (a spike in errors, performance issues, incorrect routing), you can pause the rollout, fix the issue, and restart.

    That’s far safer than rolling out the feature to one million clients all at once.
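    The pause-or-advance decision at each phase can be sketched as a simple gate. The rollout percentages come from the article; the 1% error-rate threshold is an assumption, not a stated policy:

    ```python
    # Sketch: phased rollout with a simple error-rate gate between phases.
    # The 1% limit is an illustrative assumption.
    PHASES = [0.10, 0.25, 0.50, 1.00]
    ERROR_RATE_LIMIT = 0.01

    def next_phase(current_phase, observed_error_rate):
        """Advance only if the current phase looks healthy; otherwise pause."""
        if observed_error_rate > ERROR_RATE_LIMIT:
            return current_phase, "paused: fix the issue, then restart this phase"
        i = PHASES.index(current_phase)
        if i + 1 < len(PHASES):
            return PHASES[i + 1], "advance"
        return current_phase, "fully rolled out"

    print(next_phase(0.10, 0.003))  # healthy at 10% -> advance to 25%
    ```
    
    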

    There’s something else important.

    Your RM feature isn’t launching alone.

    When clients download the latest version of the app, they’re not just getting your feature. They’re also getting performance improvements, security updates, bug fixes, and other features built by different teams.

    All of these changes are released together as one app update.

    Why fintech features go through such a rigorous release process

    You might be wondering: why all these stages? Why not just build it and launch it?

    Because fintech apps deal with real money.

    Imagine what happens if a bug miscalculates a client’s account balance. A high-net-worth client logs in and sees $500,000 instead of $5,000,000. They panic. They call their relationship manager. 

    The relationship manager escalates to leadership. 

    Now you’re explaining to leadership why a client’s reported net worth dropped by millions overnight.

    That’s why enterprise fintech companies build, test, and launch in stages.

    You’ve seen how this works.

    Lower environments where teams build independently. Higher environments where everything connects. Pilot testing where real users validate the feature. And phased rollout where you launch to 10% of high-net-worth clients, then 25%, then 50%, then 100%.

    What started as “let’s build and launch” is now a four-stage release process that protects customers’ money, prevents compliance violations, and ensures the feature actually works before one million high-net-worth clients see it.

    Next article: How To Estimate Delivery Timelines For Your Fintech Feature

  • How To Choose The Right Metrics For Your Fintech Feature

    Designers have built the final screen, and leadership has approved it. The software architect created the architecture diagram and defined the API contract. The contract defines what data the mobile app can request and what the backend must return.

    This marks the beginning of Piece 4 of the fintech puzzle: The Solution Build.

    At this stage, you share feature delivery timelines with leadership and begin building the mobile app screen.

    But before development starts, you must define how you will measure success. In other words, you must define your metrics.

    It’s pretty simple because you’ve done most of the heavy lifting during the first three pieces of the fintech puzzle. All you have to do now is turn that work into metrics.

    Here’s what you know from the Product Blueprint

    Churn in the high-net-worth division increased by 10% quarter over quarter. In a Canadian bank serving 1 million HNW clients, that means roughly 100,000 clients leaving within 90 days.

    Seventy percent of that churn came from retired business owners.

    Many had recently become Canadian snowbirds, spending winters in the US in places like Florida or Arizona. They were building a second life, setting up a second home, and managing large cross-border expenses. That required moving significant funds between Canada and the US.

    Large transfers required approval from a relationship manager (RM) due to bank limits and cross-border regulations.

    When the RM was unavailable, no backup relationship manager was officially assigned. The support team lacked the authority to approve the transfer and told the client to wait. The client did not wait and moved the money through another bank.

    To address this problem, you designed a solution.

    Low-fi wireframe of the solution

    Primary metric: Did clients successfully reach their RM?

    The core outcome high-net-worth clients want is access to their relationship manager when they need help. This was especially critical for traveling retired business owners who needed to transfer funds.

    Your first metric measures whether clients achieved that outcome:

    % of HNW clients who successfully reached their RM through the app

    “Successfully reached” means the client sent an email or initiated a call through the app.

    If 150,000 out of 1,000,000 clients contacted their RM this month, the contact rate is 15%.

    This metric tells you whether clients can reach their RM when they need help.

    Response metric: Did clients receive timely help?

    Reaching the RM is not enough. Clients need timely responses from the RM or the backup team.

    Leadership defines response expectations in Service Level Agreements (SLAs). For example, emails must be answered within 2 hours and calls must be returned within 10 minutes.

    So you measure:

    % of initiated contacts responded to within the SLA

    If clients initiated 10,000 contacts and the bank responded to 8,200 within the SLA window, the response rate is 82%.

    Track escalations 

    From your interviews, you know what happens when RMs are unreachable. Clients contact the general support line because they don’t know who the backup relationship manager is.

    So you track:

    % of clients who called general support because they could not reach their RM

    A high escalation rate signals friction in the workflow, even if the primary contact metric looks strong.

    Churn and its early indicator 

    Churn is the ultimate business metric, but it is also a lagging indicator. By the time churn appears in your reports, the client has already left the bank.

    This happens because high-net-worth clients rarely close their accounts immediately. They typically move their assets gradually before making the final decision to leave. As a result, churn data alone does not tell you early enough whether your feature is working.

    To detect changes sooner, you track an early indicator of churn.

    In high-net-worth banking, the most reliable early signal is net asset outflow. When clients begin losing trust in the bank, they usually start by transferring part of their assets elsewhere. They may move funds to another institution while keeping their account open for some time.

    For that reason, you closely monitor the percentage change in net asset outflow per client.

    % change in net asset outflow per client

    This metric measures how much money clients move out of the bank relative to their historical baseline. For example, if a client typically keeps $2 million with the bank and transfers out $500,000, that represents a 25% asset outflow.

    When asset outflows begin to decrease after a feature launch, it suggests that fewer clients are starting the process of leaving the bank. In other words, trust is stabilizing.

    Alongside this early signal, you still track churn itself.

    % churn

    This measures the percentage of clients who actually close their accounts during a given period. For example, if 100 out of 1,000,000 clients close their accounts in a month, the churn rate is 0.01%.

    Because churn reflects the final stage of client departure, it takes longer to change.

    When asset outflows decline first, and churn later decreases, it provides strong evidence that the feature is addressing the root cause of the problem.
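The worked examples above can be collapsed into one small sketch. The numbers are the ones used in this section; the function and variable names are illustrative, not a real reporting system.

```python
# The four launch metrics, using the worked numbers from this section.

def pct(part, whole):
    """Express part/whole as a percentage, rounded to 2 decimals."""
    return round(100 * part / whole, 2)

# Contact metric: % of HNW clients who reached their RM through the app
contact_rate = pct(150_000, 1_000_000)   # 15.0

# Response metric: % of initiated contacts answered within the SLA
sla_response_rate = pct(8_200, 10_000)   # 82.0

# Early churn indicator: one client's net asset outflow vs. their baseline
asset_outflow = pct(500_000, 2_000_000)  # 25.0

# Churn: % of clients who actually closed their accounts this month
churn_rate = pct(100, 1_000_000)         # 0.01
```

Notice the scale difference: churn moves in hundredths of a percent, which is another reason the outflow signal is the one to watch first.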

    Apply segmentation and travel filters

    You track all the metrics above for all high-net-worth clients, and then do the same for retired business owners and passive heirs. These two segments represent nearly 70% of your client base.

    Within each segment, you apply a travel filter. So you end up with these views:

    • All high-net-worth clients
    • Retired business owners (domestic)
    • Retired business owners (traveling)
    • Passive heirs (domestic)
    • Passive heirs (traveling)

    If 90% of all HNW clients reach their RM but only 62% of traveling retired business owners do, the core problem remains unsolved for the key segment driving churn.
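As a rough sketch of how these views might be produced, here is a toy example that filters client records by segment and travel status, then computes the contact metric per view. The record fields (`segment`, `traveling`, `reached_rm`) are invented for illustration, not a real schema.

```python
# Toy client records; a real system would pull these from the warehouse.
clients = [
    {"segment": "retired_business_owner", "traveling": True,  "reached_rm": False},
    {"segment": "retired_business_owner", "traveling": False, "reached_rm": True},
    {"segment": "passive_heir",           "traveling": True,  "reached_rm": True},
    {"segment": "passive_heir",           "traveling": False, "reached_rm": True},
]

def contact_rate(records):
    """% of records where the client reached their RM through the app."""
    if not records:
        return None
    reached = sum(1 for r in records if r["reached_rm"])
    return round(100 * reached / len(records), 1)

# Two of the five views: the overall number and the key churn-driving slice.
views = {
    "all_hnw": clients,
    "retired_business_owners_traveling": [
        c for c in clients
        if c["segment"] == "retired_business_owner" and c["traveling"]
    ],
}

per_view = {name: contact_rate(records) for name, records in views.items()}
```

The point of the structure is that the overall number and the segment numbers come from the same metric definition, applied to different filters.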

    Where does the data come from?

    In most banks, the data required for these metrics lives across multiple systems.

    The primary contact metric comes from mobile app event tracking. The response metric comes from the CRM system that relationship managers use to manage client requests. Asset outflow data comes from the bank’s transaction systems.

    To combine these sources, you collaborate with technology and analytics stakeholders. The technology team integrates the data into a centralized data warehouse, while the analytics team builds dashboards that present all metrics in one place. This allows leadership to monitor the feature’s performance without relying on multiple reports.

    Before presenting the metrics framework to leadership, confirm how each metric will be captured. Some metrics may already exist in current systems, while others may require additional work to collect or integrate the data.

    For example, the mobile app may need new event tracking to capture when a client taps the call or email button. The CRM system may need additional tags to record whether a request was escalated to general support. Travel indicators may require capturing login IP locations or cross-border transaction flags.

    Each of these changes requires coordination with engineering and analytics teams. If new tracking or integrations are required, that work must be scoped and added to the development plan before delivery timelines are communicated to leadership.

    Leadership review

    Once you validate the metrics with technology, walk leadership through the full measurement plan. Present it the same way you presented the Product Blueprint and the wireframes.

    Explain what each metric measures, why it matters, and how it will be tracked, then answer any questions they have.

    Only after you secure their buy-in should you move to the next step of the solution build: estimating delivery timelines.

    From Product Blueprint to measurable metrics

    The core problem was simple. Clients couldn’t reliably reach their RM. So you measured the percentage of HNW clients who successfully contacted their RM through the app.

    But reaching out isn’t enough. If no one responds, the problem still exists. So you measured the percentage of contacts that responded within the SLA.

    When clients can’t reach their RM, they escalate to general support. So you tracked the percentage of clients calling general support because they couldn’t reach their RM.

    And since clients move assets before they formally churn, you monitored asset outflow as an early indicator and tracked churn rates to see if fewer clients were leaving the bank.

    That’s how metrics are defined. Not by brainstorming metrics, but by translating the Product Blueprint into measurable outcomes that tie directly to the customer problem and the business impact.

  • Why You Must Review the Wireframe with Leadership Before Moving To Final Design


    Technology validation answers one question: Can we build this wireframe?

    Leadership review answers a different one: Should we build this exactly as it is?

    That difference matters.

    Because leadership isn’t reviewing APIs or routing logic.

    They’re reviewing:

    • Does this solution directly address the churn problem we identified?
    • Are there any compliance or regulatory risks in this design?
    • Does this deserve priority over other features right now?

    If you don’t run this meeting properly, your simple feature can quietly turn into something much bigger than you planned.

    Here’s how to structure the leadership review so that doesn’t happen.

    Who should be on this call?

    You’ll have the same group of stakeholders who attended the Product Blueprint Review meeting.

    • Business Leadership (leaders from Wealth or HNW Banking)
    • Product Leadership (leaders from Product, Design, and Engineering)
    • Dependency Teams (Compliance, Marketing, Client Support)

    In addition to these groups, you also have your technology stakeholders who attended your last meeting.

    Walking through the wireframe

    Similar to the last meeting, you ask your designer to walk everyone through the wireframe.

    [Image: Low-fi wireframe]

    The designer explains both call routing options.

    1) Option A: RM call forwarding (best experience)

    Calls go directly to the RM when they’re available. When they’re not, calls are routed to an internal support queue. From the client’s point of view, nothing really changes. It’s still one number. One simple flow.

    2) Option B: Call-to-message fallback (simpler)

    If the RM doesn’t answer, voicemail directs the client to email the RM through the app. That email is automatically copied to an internal support team that can respond if needed.

    You also clarify that this feature will be shown only to the one million high-net-worth clients, not to all banking customers (more than 10 million).

    Once the designer walks through the wireframe, you open the floor for questions.

    You’re going to get a lot of feedback and questions from this group.

    Let me show you what might come up.

    Compliance might say:

    “We need a pop-up message warning high-net-worth clients not to share confidential information since the backup team will also be copied on emails.”

    Imagine getting that feedback after the final designs are complete. You’d be redesigning everything.

    This is exactly why you’re reviewing wireframes instead of final designs.

    But compliance feedback is just the beginning.

    Leadership might ask for more features

    Stakeholders will suggest additional features. For example, wealth leadership might want a chat feature in addition to call and email.

    This is where you push back.

    Keep version one simple. Be ready to push back on requests that make your wireframe more complex.

    Because here’s what happens if you don’t.

    Someone suggests chat. Then video calls. Then appointment booking. Before you know it, you’re building a full communication platform instead of solving the original problem.

    This is how feature creep happens.

    So how do you push back without offending leadership?

    First, acknowledge the feedback. Then provide data to support your recommendation.

    You might say: 

    “That’s great feedback. The data currently shows that email and call are the two most used options by our high-net-worth clients to contact their relationship managers. I recommend we first test these two options, get feedback, and then run a survey to see if clients need a chat option. That way we’re taking a data-driven approach.”

    Or, the technology team might chime in and push back on this request themselves.

    Someone from the technology team might say: 

    “We can implement the chat option, but it will take significantly more time. We use a third-party chat platform like Intercom, and integrating that into the mobile app requires custom API work and security reviews. That could add several months to the timeline.”

    You’ll also get other questions.

    Answer them. If stakeholders have concerns that need deeper discussion, address them separately.

    But here’s the question you’ll definitely get:

    How soon can we launch this feature?

    Here’s what you say: 

    “We were waiting for approval on the low-fi wireframe. Now that we have alignment, the next step is for technology to provide timeline estimates. I’ll coordinate with them and share final timelines.”

    You need to be the single point of contact.

    Because you don’t want leadership to hear different estimates from multiple teams. You coordinate the timelines. You communicate them to leadership.

    Meanwhile, you set the next steps with technology.

    You ask the software architect to start working on the architecture diagram.

    At this point, some technology teams might say they need to run a quick proof of concept before they can estimate timelines. And they might need a week or two max.

    Remember, these timelines are estimates. They’re not set in stone.

    Technology isn’t the only team with next steps.

    What dependency teams need to do

    Ask the dependency teams, especially marketing and client support, to share their distribution plan.

    How will this feature be distributed? How will it be marketed?

    As a product manager, it’s your responsibility to ensure the feature’s success. This includes how it’s distributed, even though marketing typically executes the tactics. You need to be aware of the marketing strategy.

    For example, they might promote this feature in bank branches. They might run an online campaign announcing the new feature for high-net-worth clients. They might send targeted emails to relationship managers explaining how to guide clients to use it.

    Once you’ve covered technology timelines and dependency team plans, you wrap up.

    Wrapping up the meeting and next steps 

    You close the meeting by outlining what happens next:

    “We’ll share the final designs and timelines after coordinating with the technology teams. Marketing and client support will present their distribution plan in a separate session.”

    With alignment secured, your designer turns the wireframe into final designs.

    Once complete, you send them to leadership for approval.

    At this stage, leadership shouldn’t request major changes because all you’re doing is adding UI and copy to the wireframe.

    The UI is already defined in a design system, a library of reusable components, colors, fonts, and spacing rules that keep the app looking consistent. Leadership can’t give feedback on UI elements because they’re standardized across the entire app.

    The only feedback should be on the copy. If you get copy feedback, your content writer makes the changes. Then you share the updated design with leadership once again for approval.

    That completes Piece 3 of the fintech puzzle, The Solution Design.

    You update the Product Blueprint by attaching the approved final designs to The Solution Design section. The blueprint remains your single source of truth, capturing all five pieces of the puzzle and keeping everyone aligned.

    You’re ready for Piece 4, The Solution Build.

    But there’s something leadership will want to know first. When will this feature go live?

    As we saw earlier in this article, you’ll communicate timelines as a part of The Solution Build. And to give leadership accurate timelines, you need to understand how enterprise fintech software actually gets built and released.

    Once you understand this, you can communicate realistic timelines to leadership.

  • How to Validate If Your Low-Fi Wireframe Can Actually Be Built


    [Image: Low-fi wireframe]

    You now understand how a mobile app screen gets built.

    But before you present that wireframe to leadership, there’s one question you must answer:

    Can it actually be built?

    Here’s what happens if you skip this step.

    You present the wireframe. Leadership loves it. They approve it. You get the green light.

    Then you take it to engineering.

    And that’s when you discover call routing will take a year to build.

    Or the data you need doesn’t exist.

    Or three teams have conflicting dependencies.

    Now you’re explaining to leadership why something that was approved suddenly can’t launch on time.

    This is why you validate the wireframe with technology stakeholders first.

    The goal of this call is simple: identify constraints and dependencies.

    Let’s look at how to run this meeting.

    Who should be on this call?

    It’s the same group of technology stakeholders who attended the Product Blueprint Review meeting.

    So, you’ll have managers from:

    • Mobile app engineering (iOS and Android)
    • Software architecture
    • Database engineering
    • Data engineering
    • Backend engineering (microservices)
    • The call center technical team (IVR), since call routing is involved

    If anyone wasn’t at the previous meeting, give them enough context separately on a one-on-one call so they’re aligned before attending this session.

    You might wonder why managers are involved, and not your core team that will actually build it.

    Because your job as a product manager is to protect your core team’s time and help them work more efficiently.

    Individual contributors should be building. Managers can represent their teams, surface constraints early, and prevent rework later.

    You’ll work closely with your core developers, engineers, and designers after this validation step. We’ll talk more about team structure in an upcoming post.

    How to structure the call

    Start by setting expectations.

    Tell everyone exactly what you’re trying to accomplish:

    “We’re going to walk through the low-fi wireframe and identify constraints and dependencies. That’s the goal of this call.”

    Simple. Clear.

    Now you’re ready to walk through the wireframe. 

    Let the designer lead the walkthrough

    Don’t walk through the wireframe yourself. Ask your designer to do it.

    Why?

    Because they’re closer to the design. They understand the interactions. They can explain why elements are placed where they are.

    In our case, the designer also walks through both call-routing options.

    1) Option A: RM call forwarding (best experience)

    Calls go directly to the RM when they’re available. When they’re not, calls are routed to an internal support queue. From the client’s point of view, nothing really changes. It’s still one number. One simple flow.

    2) Option B: Call-to-message fallback (simpler)

    If the RM doesn’t answer, voicemail directs the client to email the RM through the app. That email is automatically copied to an internal support team that can respond if needed.

    You step in only when context is missing.

    For example, if they forget to mention that this feature is only for the one million high-net-worth clients (not all ten million banking customers), you clarify that.

    Once the walkthrough is complete, open the floor for questions.

    Questions fall into two buckets: design questions and business questions.

    Design questions sound like this: 

    “What happens when the user clicks the call button? Where does it take them?”

    The designer answers: 

    “It opens the phone’s native dialer. Same for email. It opens the default email app, and the backup team is automatically copied.”

    Business questions sound like this: 

    “What happens if a relationship manager’s photo isn’t in the database?”

    You answer: 

    “In an ideal world, every RM has a photo. But if one doesn’t exist, we show a default icon instead.” The designer can show that fallback icon.

    Answer the questions. Then move forward.

    Identifying constraints

    You’ve explained the wireframe. You’ve answered questions. 

    Now it’s time to hear from engineering.

    You ask: 

    “What constraints do we have? What would prevent us from building this?”

    And then you listen.

    Here’s what might happen.

    The mobile app engineering manager asks:

    “Do we have a way to identify high-net-worth clients?”

    The architect responds:

    “We have a ‘net_worth’ flag in the client database that identifies these clients. We’re not sending it to the mobile app today, but that’s not a big effort. We can expose it.”

    Mobile team says:

    “That works.”

    Good. No constraint there.

    Then the call routing team speaks up:

    “Option A isn’t possible right now. It would take us about a year to build that capability. We can add it to the roadmap, but it’s long-term.”

    You ask:

    “What about Option B?”

    They respond:

    “Option B is possible. Private banker calls already go through our system. We just update the voicemail message. That’s a configuration change.”

    There’s your constraint.

    Option A = long-term investment.
    Option B = feasible now.

    Once you’ve identified constraints, move to dependencies.

    Ask each team: 

    “What do you need from other teams before you can start building?”

    Yes, everyone depends on the architecture diagram. That’s assumed.

    But what else?

    • Backend engineers and mobile app developers depend on the API contract. Without it, development can’t begin.
    • Mobile app developers might depend on IVR routing codes.
    • Backend engineers might depend on data engineers exposing specific fields.

    Ask each team to call out their dependencies explicitly. This way, there are no surprises later in the development process.

    Once constraints and dependencies are clear, explain the next step.

    Tell your stakeholders:

    “We’ll now schedule a meeting with leadership to get approval on the low-fi wireframe. I’ll be adding all of you to that call.”

    In our case, we identified a blocker. Option A would take a year.

    So we present both options:

    • Option A: Best experience, long timeline, high investment
    • Option B: Feasible, faster, lower effort

    Leadership decides which direction to take.

    In the next post, we’ll walk through how to run that leadership meeting.

  • How A Fintech App Screen Actually Gets Built


    In the previous post, we gave our designers the context they needed to create a low-fidelity wireframe.

    Here’s the wireframe they built.

    [Image: Low-fi wireframe]

    It looks simple, right?

    An image of a relationship manager (RM). Their name. Their title. And options to email or call.

    But it’s just one screen. And showing this screen to one million high-net-worth clients means a lot has to happen behind the scenes.

    Multiple technology teams are involved in making this work.

    Before we walk this wireframe with our technology stakeholders, we need to understand who those teams are and what they actually do.

    Let’s start with the first team.

    Mobile app developers

    They are also called client-side or front-end app developers.

    They turn the low-fi wireframe into a real screen inside the banking app. For this, they use programming languages such as Swift for iPhones and Kotlin for Android phones.

    To build the screen, they need the RM’s image, name, and title. All of that data lives in databases.

    A database is a place to store data.

    Think of it like an Excel file. Each sheet is a table. Each row is a record. Each column is a field.

    In our case, one table may store the relationship manager’s name, another may store their title, and another may store their image.

    Database engineers ensure this data is accurate, up to date, and stored consistently.

    If a relationship manager leaves and a new one joins, these engineers ensure the records are updated correctly across the databases.

    But in large enterprise financial services firms, the data is spread across multiple databases: some in the cloud, and some in systems owned and run by the bank.

    The cloud is a collection of remote servers and services owned by companies like Amazon (AWS) or Microsoft (Azure). Banks rent this computing power and storage instead of running their own servers in-house. This is mainly done to save cost and make it easier to add more storage or computing power as the bank grows.

    Most modern fintechs keep all of their data in the cloud, but that’s not the case for banks. Some banks are hundreds of years old, so they still run their own databases, managed entirely in-house.

    At this point, we need someone to pull all this data from different places and make it usable for the app.

    That’s what data engineers do.

    Their job is to combine data from multiple places, organize it in a format that mobile app developers can use, and, most importantly, automate this process so it keeps running without manual work.

    Let me explain what “keeps running” means.

    In our case, relationship managers join, leave, change titles, or update their photos. When any of this happens, the mobile app needs to show these updates immediately. Data engineers set up automated processes that check for updates every few minutes (or even seconds) and pull the latest data from all the different databases.

    This automated setup is what’s called a data pipeline. 

    Once the data is stitched together and organized, data engineers store it in what’s called a data warehouse or a clean database. Think of this as a well-organized filing cabinet where all the relationship manager information is kept in one place, ready to be used.
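The stitching step can be sketched as a toy pipeline run. Both sources and all field names below are invented for illustration; a real pipeline would read from the bank's actual systems on a schedule.

```python
# Two invented source systems, each keyed by an RM id.
hr_system   = {"rm-42": {"name": "Jane", "title": "Senior Relationship Manager"}}
image_store = {"rm-42": {"imageUrl": "https://example.com/rm-42.jpg"}}

def run_pipeline(warehouse):
    """Merge RM records from both sources into the clean store."""
    for rm_id, record in hr_system.items():
        merged = dict(record)
        merged.update(image_store.get(rm_id, {}))  # attach the photo, if any
        warehouse[rm_id] = merged                  # latest run overwrites stale rows
    return warehouse

warehouse = run_pipeline({})
```

In a real pipeline this run repeats automatically every few minutes, which is what keeps the warehouse fresh when an RM joins, leaves, or updates their photo.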

    Backend engineers (who we’ll talk about next) then build the microservices and APIs that the mobile app talks to.

    So what is an API?

    An API is how the mobile app requests data and gets a response.

    Think of an API like a waiter at a restaurant.

    You (the mobile app) sit at a table and want food (data). You don’t go into the kitchen and cook it yourself. Instead, you tell the waiter what you want. The waiter goes to the kitchen (the data warehouse), gets your food, and brings it back to you in a nice, organized way.

    In our feature, the mobile app asks the API, “Give me the relationship manager’s name, title, and image for this client.” The API collects that information and sends it back to the mobile app in a structured format.
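To make that exchange concrete, here is a minimal sketch of the request and response. The endpoint path, function names, and fields are hypothetical, and a stand-in replaces the real HTTP layer so the example stays self-contained.

```python
def get_relationship_manager(client_id, fetch):
    """Ask the API for this client's RM details. `fetch` stands in for
    the HTTP layer (the 'waiter')."""
    return fetch(f"/api/v1/clients/{client_id}/relationship-manager")

def fake_fetch(path):
    """Stand-in for the server side: returns the structured response
    the mobile app expects."""
    return {
        "relationshipManager": {
            "name": "Jane",
            "title": "Senior Relationship Manager",
        }
    }

response = get_relationship_manager("client-123", fake_fetch)
rm_name = response["relationshipManager"]["name"]  # the app renders this
```

The app never sees the kitchen: it only knows the question it asked and the structured answer it got back.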

    These APIs are exposed by microservices.

    A microservice contains the actual logic and rules. An API is simply how the mobile app talks to it.

    Think of a microservice as a small, specialized department in the bank. The API is the door the app uses to ask for data.

    This setup exists mainly for two reasons.

    Safety and security

    If the mobile app talked directly to the database, anyone who figured out how the app works could potentially access all the data in the database.

    In our feature, a hacker could try to request not just their own relationship manager’s information, but information about other clients’ relationship managers, or even client account details.

    Microservices act as a security checkpoint. When the mobile app asks for the RM’s information, the microservice first checks: Is this user logged in? Are they allowed to see this specific RM’s information? Only after passing these checks does the microservice retrieve the data and return it to the app.
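A minimal sketch of that checkpoint logic might look like this. The tokens, ids, and lookup tables are invented for illustration; a real microservice would verify sessions and entitlements against dedicated identity systems.

```python
# Invented lookup tables standing in for real identity/entitlement systems.
RM_ASSIGNMENTS   = {"client-123": "rm-42"}       # which RM serves which client
ACTIVE_SESSIONS  = {"token-abc": "client-123"}   # which client owns which session

def get_rm_info(session_token, requested_client_id):
    # Check 1: is this user logged in?
    client_id = ACTIVE_SESSIONS.get(session_token)
    if client_id is None:
        return {"error": "not logged in"}
    # Check 2: are they asking about their own RM?
    if client_id != requested_client_id:
        return {"error": "not allowed"}
    # Only now retrieve and return the data.
    return {"rm_id": RM_ASSIGNMENTS[client_id]}

allowed = get_rm_info("token-abc", "client-123")  # passes both checks
blocked = get_rm_info("token-abc", "client-999")  # someone else's RM: refused
```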

    Scalability

    Scalability means handling more users without the system breaking.

    Imagine 10,000 clients opening the mobile app at the same time, all requesting their relationship manager’s information.

    If the mobile app talked directly to the data warehouse, too many requests would hit it at once and slow it down.

    Microservices sit in between and manage this traffic. They can temporarily store frequently requested RM details so they don’t call the data warehouse every single time. This protects the data warehouse from being overloaded.
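That “temporarily store” idea is a cache. Here is a minimal sketch, assuming a simple time-to-live policy; the names and the five-minute window are illustrative choices, not the bank's real configuration.

```python
import time

CACHE = {}           # client_id -> (expires_at, rm_details)
TTL_SECONDS = 300    # serve cached details for up to 5 minutes

def load_from_warehouse(client_id):
    """Stand-in for the expensive data warehouse query."""
    return {"name": "Jane", "title": "Senior Relationship Manager"}

def get_rm_details(client_id):
    entry = CACHE.get(client_id)
    if entry and entry[0] > time.time():
        return entry[1]                        # cache hit: warehouse untouched
    details = load_from_warehouse(client_id)   # cache miss: one warehouse call
    CACHE[client_id] = (time.time() + TTL_SECONDS, details)
    return details
```

With this in place, 10,000 simultaneous requests for the same RM turn into one warehouse query plus 9,999 cache hits.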

    For our feature, multiple microservices might be involved:

    • One microservice for identifying if the logged-in customer is a high-net-worth client
    • Another for retrieving the relationship manager’s information from the data warehouse

    Backend engineers (also called microservices engineers) might build new microservices or modify existing ones to fetch data from the data warehouse.

    They apply checks and rules, like the traffic controls you saw above, before returning data to the mobile app.

    Software architects

    You might be wondering who decides which databases to use, which microservices to build, and how data flows between teams.

    That’s the role of software architects.

    They look at everything we’ve discussed so far and tie it together. They decide which databases are needed, which microservices to use, and what APIs the backend engineers need to build.

    They also define how the mobile app will consume that data. This definition is called the API contract.

    What does an API contract look like?

    An API contract is written in JSON. It defines exactly what data the mobile app will receive.

    Here’s an example for our feature:

    {
      "relationshipManager": {
        "name": "Jane",
        "title": "Senior Relationship Manager",
        "imageUrl": "https://bank.com/images/rm/Jane-mitchell.jpg",
        "email": "[email protected]",
        "phone": "+1-555-0123",
        "availability": "available"
      }
    }

    This contract tells mobile app developers exactly which fields (name, title, imageUrl, etc.) they’ll receive and how that data will be structured.

    The contract stays stable even when things change behind the scenes. Today, the RM’s name might come from one database. Tomorrow, it might come from three different systems stitched together. As long as the API returns the same fields, the mobile app doesn’t break.
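This stability is exactly what mobile developers build against. As a sketch, the app-side parsing below depends only on the contract's field names, so it keeps working no matter which systems produced the data behind the API.

```python
import json

# The payload the app receives; the fields match the contract, and the app
# never knows (or cares) how many databases were stitched to produce it.
payload = json.loads("""
{
  "relationshipManager": {
    "name": "Jane",
    "title": "Senior Relationship Manager",
    "availability": "available"
  }
}
""")

rm = payload["relationshipManager"]
screen_text = f'{rm["name"]}, {rm["title"]}'  # what the screen would render
```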

    Database engineers, data engineers, backend engineers, and mobile app developers all rely on the architecture diagram built by software architects to understand what they need to build and how their work fits together.

    Without this diagram, teams can’t confidently start their work because dependencies and responsibilities are unclear.

    How it all comes together

    What started as a simple low-fi wireframe turns out to be anything but simple.

    An image. A name. A title. An email and call option.

    Behind that one screen sits a chain of decisions, systems, and teams working together.

    Database engineers keep the underlying data accurate. Data engineers pull that data together and keep it fresh. Backend engineers build microservices and APIs to control how that data flows. Software architects define the blueprint that holds everything together. Mobile app developers turn all of that into a screen the client can tap.

    When you understand this flow, walking a wireframe with technology stakeholders becomes very different. You’re no longer just talking about screens. You’re talking about data sources, contracts, dependencies, and sequencing.

    And that’s exactly where the next step begins.

    In the next post, we’ll walk through how to run a technical feasibility review using this wireframe and this understanding to turn design intent into something that can actually be built.

  • How To Turn Your Customer Problems Into Low-Fidelity Wireframes


    You’ve got leadership buy-in on your Product Blueprint.

    With that alignment in place, you’re ready to move into the third piece of the puzzle. This is where The Solution Design begins.

    The focus now shifts from understanding the problem to designing a solution that teams can align on.

    The most effective way to start this phase is with low-fidelity wireframes.

    In this post, I’ll walk through what low-fi wireframes are, why they matter, and how to create them with your design team.

    What is a low-fidelity wireframe?

    A low-fidelity wireframe (often called a low-fi wireframe) is a rough sketch of a website or app interface.

    It’s a simplified version of the final product that focuses on functionality rather than visual design.

    Low-fi wireframes include only the basic design elements: boxes for images, lines for text, and simple shapes for buttons.

    Here’s an example of a low-fi wireframe:

    [Image: Example of a low-fi wireframe]

    You wouldn’t build a house without blueprints, right? Low-fi wireframes are your blueprints.

    Why do low-fi wireframes matter? 

    Low-fi wireframes help you communicate your solution clearly before investing in final designs.

    They help in three ways.

    1) Communicating with engineering

    Wireframes help you understand technical feasibility early.

    Before leadership sees final designs, you want to know whether the solution is even possible to build. Engineers can spot technical constraints quickly when reviewing wireframes.

    2) Communicating with leadership

    Wireframes make alignment easier.

    Leadership can react to the solution’s scope and flow before you invest in final designs. Catching concerns, such as compliance risks, at the wireframe stage prevents costly redesigns later.

    3) Communicating with designers

    Wireframes make it easier to iterate.

    They’re quick to edit, which allows you to test ideas and change direction before committing to final designs.

    Understand your app’s structure and user flows

    Before working with designers on wireframes, you need to understand how your app is structured and how users navigate it.

    You need to know where the feature you’re building will appear and how it affects existing flows.

    We’re building a feature that allows private banking clients to contact their relationship managers. Most of their banking interactions happen through their relationship managers, so the feature needs to be visible as soon as clients open the app.

    Based on the data, HNW clients primarily use the mobile app over desktop. So we’re building this feature for mobile first.

    On mobile, that means putting the feature on the main screen where it’s immediately visible. You get there by mapping the core flows and understanding where clients start their journeys.

    Without a clear understanding of your app’s structure and flows, it’s hard to know where a feature fits.

    Before you start wireframing, map out the critical flows of your product.

    Ask yourself:

    • Where is the entry point for this feature?
    • How does it fit into existing screens?
    • Does it replace something, or add to what’s already there?

    Don’t worry about technical implementation at this stage. Your goal is to understand which screens and flows need to include the feature.

    How to design low-fi wireframes with your design team

    Step 1: Provide context with the Product Blueprint

    Walk your designers through the Product Blueprint.

    This gives them full context. They understand who they’re designing for and which customer problems the solution needs to address.

    Step 2: Set expectations on fidelity

    Tell them this is a low-fi wireframe. It doesn’t need to be detailed.

    Step 3: Explain where the feature lives in the app

    Show designers where the entry point for the feature will be.

    In our case, clients want to contact their relationship manager as soon as they open the app. That places the feature on the main screen, not three levels deep.

    Be specific. Show them the existing flow and explain how the feature fits into it.

    Step 4: Translate problems into design requirements

    Once you’ve shared the blueprint and explained the context, break the problems down into design requirements.

    Here are the two most critical problems:

    • Unable to reach the relationship manager when they’re away
    • No visible or empowered backup ownership

    We identified three problems earlier, but these two matter most. They prevent clients from reaching their relationship manager when it matters, which directly drives churn.

    Trying to solve everything at once leads to feature creep. It’s better to focus on the core problems first, before adding options that complicate the experience.

    With that prioritization in place, we can translate each problem into design requirements.

    Problem 1: Unable to reach the relationship manager when they’re away

    The wireframe should include:

    • A clear option to call or email the relationship manager directly from the app
    • An indicator showing whether the relationship manager is available or away
    • A photo of the relationship manager so clients know who they’re contacting
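    These three requirements map naturally onto a small data model for the contact card the main screen would render. Here’s a minimal sketch in Python; the class, field names, and sample values are all hypothetical, not the real app’s schema:

```python
from dataclasses import dataclass


@dataclass
class RMContactCard:
    """Everything the main screen needs to render the RM contact card.

    Hypothetical model for illustration only; field names are assumptions.
    """
    name: str
    photo_url: str      # photo so clients know who they're contacting
    phone: str          # call the RM directly from the app
    email: str          # email the RM directly from the app
    is_available: bool  # availability indicator (available vs. away)


# Placeholder values, not real contact details
card = RMContactCard(
    name="Alex Morgan",
    photo_url="https://example.com/rm/alex.jpg",
    phone="+1-555-0100",
    email="alex.morgan@example.com",
    is_available=False,
)
print("Away" if not card.is_available else "Available")  # Away
```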

    Once the problem is translated into design speak, move on to the next one.

    Problem 2: No visible or empowered backup ownership

    Relationship managers (RMs) in private banking are supported by an internal support team. This team takes work off the RM’s plate by triaging requests, preparing documentation, and managing follow-ups, so the RM can stay focused on client relationships and advisory work.

    From the client’s perspective, this support should remain invisible.

    High-net-worth clients expect a single owner for their financial relationship. Exposing internal support teams introduces multiple points of contact and breaks that sense of ownership.

    That principle should guide the design.

    Clients are already calling and emailing their relationship managers, so the app needs to support both channels.

    For email, messages sent through the app should copy the internal support team, so client requests don’t stall when the RM is away.
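    As a sketch of that behavior: the app delivers the client’s message to the RM and silently CCs the support team. The function and addresses below are hypothetical placeholders, not the bank’s actual system:

```python
from email.message import EmailMessage

RM_ADDRESS = "rm@example-bank.com"               # visible owner: the RM
SUPPORT_ADDRESS = "rm-support@example-bank.com"  # invisible safety net


def build_client_email(client_msg: str) -> EmailMessage:
    """Build an in-app message to the RM with the support team on CC."""
    msg = EmailMessage()
    msg["To"] = RM_ADDRESS
    msg["Cc"] = SUPPORT_ADDRESS  # support can step in if the RM is away
    msg["Subject"] = "Message from your client"
    msg.set_content(client_msg)
    return msg


msg = build_client_email("Can we review my portfolio next week?")
print(msg["Cc"])  # rm-support@example-bank.com
```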

    Calls need a similar fallback. There are two practical options.

    Option A: RM call forwarding (best experience)

    Calls go directly to the RM when they’re available. When they’re not, calls are routed to an internal support queue.

    From the client’s point of view, nothing really changes. It’s still one number and one simple flow.

    Option B: Call-to-message fallback (simpler)

    If the RM doesn’t answer, voicemail directs the client to email the RM through the app. That email is automatically copied to an internal support team that can respond if needed.

    In both cases, the client continues to contact the RM. The internal support team remains invisible. 
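    The shared idea behind both options is a single routing decision based on RM availability. A minimal Python sketch, with identifiers like `SUPPORT_QUEUE` invented for illustration:

```python
# Hypothetical sketch of the call fallback, not an actual telephony
# implementation: route to the RM when available, otherwise fall back
# to the internal support queue. The client always dials one number.

RM_LINE = "rm-direct-line"          # assumed identifier for the RM's line
SUPPORT_QUEUE = "internal-support"  # assumed identifier for the fallback


def route_call(rm_available: bool) -> str:
    """Return the destination for an incoming client call."""
    if rm_available:
        return RM_LINE
    # RM is away: fall back silently, so the internal support
    # team stays invisible to the client.
    return SUPPORT_QUEUE


print(route_call(True))   # rm-direct-line
print(route_call(False))  # internal-support
```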

    From problems to wireframes

    You’ve walked your designers through the Product Blueprint. You’ve set expectations that the wireframes can be simple. You’ve shown where the feature fits in the app and broken the problems down into clear design requirements.

    Now designers can start wireframing.

    The wireframes will focus on the two problems driving churn—showing how clients can contact their relationship manager when it matters, while keeping the internal support team invisible.

    Once the wireframes are ready, share them with engineering managers and architects. They’ll confirm whether the proposed flows can be built and identify any technical constraints. This review happens before the wireframes go to leadership.


    Next Article: How a Fintech App Screen Actually Gets Built

  • The Product Blueprint Review: How To Get Leadership Buy-in

    The Product Blueprint Review: How To Get Leadership Buy-in

    At this stage, you’ve documented your Product Blueprint.

    You’ve clarified the target audience and the problem. You’ve seen that retired business owners account for 70% of the churn. You’ve prioritized the key issues faced by this audience.

    The hard part comes next.

    Most product work doesn’t fail because the problem is wrong. It fails because leadership never fully aligns on the problem.

    That gap shows up later as new objections to the problem or requests to move in a different direction. When that happens, weeks of design and development work get thrown away.

    This erodes trust. Designers and developers lose confidence when their work gets scrapped because of late-stage misalignment.

    So how do you avoid that?

    By getting leadership aligned on the problems you’re trying to solve before you move on to design.

    That alignment happens in the Product Blueprint review meeting.

    What’s a Product Blueprint review?

    It’s a meeting where leadership reviews your Product Blueprint.

    All you’re trying to do is get a green light on the prioritized problems. Once you have that, you can confidently move to the design phase.

    By getting a green light early, you prevent misalignment later.

    But here’s the thing.

    Without structure, these meetings drift.

    Conversations slide into solutions before the problem is validated. Leaders question assumptions you thought were settled. Engineering jumps into technical debates too early.

    So you need to set expectations upfront and run a focused meeting.

    Before that, let’s start with who should be in the room.

    Who should be in the Product Blueprint review?

    In fintech and financial services companies, I’ve seen the same team setup show up again and again, especially in domains like wealth, payments, and lending.

    Three groups need to be in the room: Business, Product, and Dependency teams.

    • Business brings domain knowledge. They also fund the entire project. 
    • Product brings the product management lens. They lead everything from discovery to design to development. The group includes product, engineering, and design leadership.
    • Dependency teams like marketing and compliance need context early. Without it, they often deprioritize projects.

    Let’s break down each one.

    Business leadership

    This group represents the business line where the problem lives.

    In a high-net-worth context, this might include the VP of Wealth or the VP of HNW Banking, along with leaders who represent relationship managers.

    These leaders understand how clients generate value for the business. They’re accountable for revenue and retention and decide whether a problem is worth solving.

    Their role in the meeting is to validate whether the problem is real, significant, and aligned with business priorities.

    Product leadership

    This group applies a product management approach to solve the business problem in a structured way. 

    Their responsibility isn’t just to build products. It’s to turn business problems into clear products that drive results.

    Leaders from product, design, and engineering should be represented here. Typical roles include AVP of Product, AVP of Design, and AVP of Engineering.

    Engineering managers and the architect are also invited to listen in. They’re informed in advance that this isn’t a technical discussion and that the goal is to provide context. A separate session will be scheduled later to discuss technical implementation.

    Dependency teams

    Dependency teams vary by organization. Marketing and compliance are common examples, but the exact mix depends on how your company operates. 

    In our case (a B2C bank), we included marketing and compliance. In a B2B setup, you might also involve sales and customer support. 

    The key is to identify the teams that support product delivery.

    Here’s what happens when you don’t bring them in early.

    These teams often deprioritize initiatives when they don’t understand the value being created. From their perspective, multiple initiatives compete for their limited capacity. And projects that affect fewer users are easy to push aside.

    This is where context changes everything.

    Let’s say your bank has 10 million retail customers. A feature for 1 million high-net-worth clients may sound small in comparison, but the revenue impact is significantly higher.

    When dependency teams understand that context early, they prioritize your feature.

    Marketing needs this context to plan promotion across digital channels and physical branches. Compliance needs it to engage early and assess regulatory risk with a clear view of business impact.

    Bringing these teams in early builds shared understanding and reduces the risk of late-stage pushback or deprioritization.

    Now that you know who should be in the room, let’s talk about how to structure the meeting.

    How to structure the meeting

    Without a clear structure, this meeting will quickly drift away from your goal and slide into long hypothetical discussions that don’t lead to decisions.

    You need to keep the audience engaged and encourage interaction, while preventing the conversation from getting derailed. That starts with setting expectations upfront.

    Start with objectives and rules of engagement

    Share an agenda of what you will cover and set expectations of what you need from them.

    Meeting Objectives

    1. Walk through the Product Blueprint (target audience and problem)
    2. Get a green light on the prioritized problems 
    3. Answer questions from leadership 

    Rules of Engagement

    • Respect the order of the material and avoid jumping ahead
    • There will be time for questions and feedback at the end of the presentation
    • Leaders should focus feedback on their areas of expertise to keep the discussion productive
    • For engineering stakeholders, this meeting provides context only. A separate session will be scheduled for technical discussions
    • When giving feedback, clarify whether it is a blocker, a comment, or a suggestion

    I usually set expectations with engineering stakeholders before the meeting to avoid drifting into technical discussions. 

    This structure keeps the conversation moving, directs feedback to where it’s most useful, and still leaves room for leadership to raise important concerns.

    Deep dive into the Product Blueprint

    The next step is to deep dive into the Product Blueprint, get buy-in, and align on next steps.

    We covered how to build this document in the previous post. In this meeting, you walk through it section by section.

    The business problem

    Start with the high-level context. Call out the 10% quarter-over-quarter churn and the 100,000 clients leaving.

    The Target Audience

    Explain who is churning, how you segmented clients, and why retired business owners account for 70% of the churn.

    The Customer Problem

    Walk through Dave’s journey. Show how retired business owners became snowbirds, how their banking behavior changed, and how they struggled to reach their relationship managers. 

    This is where you should spend the most time. That’s because leadership is often far removed from customers’ day-to-day realities and your job is to bridge that gap.

    The prioritized problems  

    Present the three key issues you identified:

    • Unable to reach the relationship manager when they’re away
    • No visible or empowered backup ownership
    • Clients not identified as high-net-worth when calling from US numbers

    Next steps after the Product Blueprint meeting

    Once you’ve walked through the blueprint and answered questions, there are two possible outcomes.

    If leadership is aligned and you receive a green light, you can move forward with confidence. The next steps are to wireframe solutions using low-fidelity mockups and then present those wireframes to the same group in a follow-up meeting.

    If there are open questions or action items, take them away and follow up with the relevant stakeholders. Once those are resolved, you can proceed.

    With a clear approval in place, the next two steps are:

    • Create low-fidelity mockups
      Simple wireframes that explore solutions to the problems you’ve identified.
    • Technical feasibility review
      After the mockups are ready, set up a session with engineering managers and architects to identify technical constraints before presenting them to leadership.

    The Product Blueprint review meeting is where clarity turns into alignment. 

    By the end of the meeting, everyone understands who the product is for, which problems matter most, and why they deserve attention now.

    With a green light in place, the path forward is clear. 

    You can begin designing low-fidelity mockups to explore solutions and then work with engineering to understand the technical constraints involved.

    In the next post, I’ll walk through how to create low-fidelity mockups that bring your solution to life.
