meet the humans who built this →

Frontend Team · UI & Interactions

Backend Team · APIs & Infrastructure

Design Team · UX & Visual Design

DevOps Team · CI/CD & Deployment

QA Team · Testing & Quality

Cricket Winner
this crew shipped Cricket Winner in five months

[Design gallery: specific blog page, articles, ICC Men's T20, IPL 2026, live score, player page, and blogs page designs — Xenotix Labs]

The Brief.

Built by Xenotix Labs — Real-Time Cricket Intelligence Platform with Live Scores, Breaking News & Opinion Trading

Cricket Winner is one of the most ambitious projects we've delivered at Xenotix Labs — a real-time cricket intelligence platform purpose-built for the world's most cricket-obsessed nation: India. It is not just another live-score app, and it is not just a news aggregator. It is the convergence of three high-stakes product categories in one cohesive experience: real-time live cricket scoring, minute-by-minute curated cricket journalism, and a sophisticated opinion trading engine that lets fans put their cricket instincts to the financial test in real time.

When the founders of Cricket Winner approached us, their vision was crystal clear but operationally brutal: they wanted to build a platform where a cricket fan watching an India vs Australia match could simultaneously see the score updating ball-by-ball, read breaking news the moment a player got injured, and place an opinion trade on whether Virat Kohli would score a half-century — all without ever leaving the app, all without lag, and all at internet-scale during India's peak cricket viewership moments when traffic spikes can be ten or twenty times the baseline. That last constraint is the one that kept our architects up at night, because anyone who has ever tried to build a real-time sports platform knows that the difference between a great match-day experience and a catastrophic one is measured in milliseconds.

The platform we built spans three primary surfaces: a Flutter-based mobile application available on both iOS and Android, a Next.js-powered web platform optimized for desktop fans and SEO-driven content discovery, and a robust web admin dashboard that the Cricket Winner editorial and operations team uses to manage news, configure matches, monitor opinion trading markets, and orchestrate the platform end-to-end. Behind the scenes, the engine room runs on a microservices architecture powered by Node.js, MongoDB for high-volume read-heavy data, WebSockets for sub-second live score synchronization, and Apache Kafka for high-throughput event streaming across news ingestion, trading events, notification dispatch, and analytics pipelines. The entire stack is deployed on AWS with multi-region failover, autoscaling groups, and a CDN-fronted media layer for low-latency content delivery anywhere in the country.

Over the course of five intense months, our team of nine — two designers, four developers across mobile and backend, one DevOps engineer, a dedicated QA lead, and a project manager who held the whole orchestra together — took Cricket Winner from a Figma wireframe sketch to a production system handling thousands of concurrent users on match days. We onboarded the team to the codebase, set up CI/CD pipelines from day one, designed seven core user personas, mapped twenty-three critical user journeys, ran three full beta testing cycles, and shipped the production app to both app stores with zero rejection rounds. The result is a platform that doesn't just compete in the Indian sports app market — it sets a new standard for how real-time sports experiences should feel.

This case study is a deep look into how we built it. It is a transparent walkthrough of every architectural decision, every tradeoff, every late-night problem-solving session, and every engineering choice that went into Cricket Winner. If you are a founder thinking about building a real-time platform, a product leader evaluating tech partners, or simply curious about what it takes to build sports tech at scale in India, this case study is written for you.

CLIENT

Company

Winner Media Sports

Industry

Other

Location

Dubai, UAE

Type

App, Website, Admin Dashboard, UI/UX Design


the build log →

Day 07

The Client & Their Vision

The founders behind Cricket Winner are a duo of seasoned operators — one with a decade of experience in Indian fintech and the other with a deep background in sports media. They came to Xenotix Labs not because they needed a development shop, but because they needed a product partner who understood that cricket in India is not a sport. It is a religion, a cultural backbone, a national mood regulator, and one of the most lucrative attention markets on the planet. Any platform that aspires to serve the Indian cricket fan must respect that intensity, and any technology stack that powers such a platform must be built to absorb it.

Their original pitch deck described Cricket Winner in three lines: "We want to build a platform where every ball matters and every opinion has value. Cricket fans should not just consume the game — they should participate in it. And the experience must be instant, intelligent, and indispensable." That single paragraph captured what would become our north star for the next five months. Every product decision, every design tradeoff, every line of backend code was filtered through those three words: instant, intelligent, indispensable.

In our first discovery workshop, we mapped out the founders' assumptions about the Indian cricket market and pressure-tested each one. They believed, correctly, that the existing live score apps in India were functionally adequate but emotionally hollow — fast enough to load a score, but slow at telling you why that score mattered. They believed the news consumption experience for cricket fans was fragmented across a dozen websites, each with intrusive ads, slow load times, and zero personalization. And they believed, most ambitiously, that the cricket fan was ready for a new category of engagement entirely: opinion trading, where the same instincts that make fans shout at the television could be channeled into structured, regulated micro-markets where users buy and sell positions on cricket outcomes in real time.

The opinion trading thesis was the most exciting and the most technically demanding part of the brief. Globally, opinion trading platforms in sports and politics — companies like Polymarket and Kalshi — have shown there is a robust appetite for prediction-based engagement. In India, the regulatory environment around real-money skill-based platforms is mature, and the cricket vertical specifically has shown breakthrough product-market fit through the rise of fantasy sports apps. Cricket Winner's founders saw an open lane: take the engagement mechanics of opinion trading, marry them to the immediacy of live-score apps, layer them on top of high-quality cricket journalism, and create a platform where a single fan's match-day experience moves seamlessly across watching, reading, and trading.

Our job was to translate this vision into a product. That meant taking a multi-page strategic brief and converting it into seven user personas, twenty-three user journeys, and a feature backlog of over one hundred and forty distinct user-facing capabilities. It meant facilitating tough conversations about feature prioritization for the MVP, because building everything at once would have meant shipping nothing well. It meant pushing back on certain ideas — like a complex social feed feature in V1 — and championing others — like investing aggressively in the live-score infrastructure even if it meant cutting other surface-level features. The founders trusted us to be product partners, not order-takers, and that trust is what made the project sing.


Day 14

Problem Statement & Challenges

Before any code was written, before any pixels were pushed in Figma, our team spent three weeks deeply embedded in problem definition. This is a phase many development agencies skip in their rush to billable hours, and it is also the phase that most often determines whether a product succeeds or quietly dies six months after launch. For Cricket Winner, we identified eleven distinct problem clusters that the platform needed to solve, and we will walk through them in the order of strategic importance.

The first and most critical problem was real-time score latency. Existing cricket apps in India advertise themselves as "live" but in practice, scores update every twenty to forty-five seconds, sometimes longer during high-traffic moments. This is acceptable when a fan is checking a score during a meeting, but it is unacceptable for a platform that wants fans to engage with the game in real time. If our opinion trading engine showed a price for "Will the next ball be a six?" but the underlying score data was thirty seconds delayed, the entire experience would collapse. We needed to engineer a system where score updates from the official data feed reached the user's device in under one second consistently, even during peak load.
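
To make the polling-versus-push difference concrete, here is a minimal sketch — not Cricket Winner's production code — of the fan-out pattern a sub-second score pipeline relies on: the feed handler publishes each ball event once, and every connected client receives it immediately via a subscription (standing in here for a WebSocket connection). All names and shapes are illustrative.

```typescript
// Minimal fan-out sketch: a feed update is pushed to every subscriber
// the moment it arrives, rather than being polled on a 20-45s interval.
type ScoreEvent = { matchId: string; over: number; ball: number; runs: number };
type Listener = (e: ScoreEvent) => void;

class ScoreBroadcaster {
  private listeners = new Map<string, Set<Listener>>();

  // A client "socket" subscribes to one match's event stream.
  subscribe(matchId: string, fn: Listener): () => void {
    if (!this.listeners.has(matchId)) this.listeners.set(matchId, new Set());
    this.listeners.get(matchId)!.add(fn);
    return () => {
      this.listeners.get(matchId)?.delete(fn);
    };
  }

  // Called once per ball by the feed ingester; every subscriber
  // sees the event in the same tick -- no polling delay.
  publish(e: ScoreEvent): void {
    this.listeners.get(e.matchId)?.forEach((fn) => fn(e));
  }
}
```

In a real deployment the subscriber set would live behind a WebSocket gateway and a pub/sub layer, but the latency argument is the same: the delay budget is dominated by the feed and the network hop, not by a polling interval.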

The second problem was news velocity and quality. The Indian cricket news landscape is dominated by a handful of large media houses whose web experiences are weighed down by aggressive advertising, slow page loads, and low-quality content farming during slow news cycles. We needed to build an editorial CMS that allowed Cricket Winner's in-house journalism team to publish high-quality cricket news with minute-by-minute granularity during active matches, while also maintaining a steady drumbeat of long-form analysis, player profiles, and historical content. The CMS had to support not just text but rich media — images, videos, infographics, social media embeds — and it had to push notifications to users intelligently without becoming spammy.

The third problem, and the most architecturally complex, was the opinion trading engine. Building a trading engine — even one operating on opinion markets rather than securities — is fundamentally a financial systems problem. It requires a matching engine to pair buyers and sellers, a settlement layer to credit and debit user wallets atomically, an audit trail that survives database failures, a fraud detection pipeline, and integration with payment gateways for deposits and withdrawals. All of this had to be built with the assumption that during a high-stakes moment — say, the final over of an India-Pakistan match — thousands of users might place trades simultaneously. The system had to clear those trades fairly, transparently, and without losing a single transaction.
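
The core of any such engine is the matcher. The sketch below shows price-time-priority matching for a single binary market, with prices as integer ticks from 0 to 100 — an illustrative simplification of the idea, not the engine we shipped, and all identifiers are hypothetical.

```typescript
// Illustrative price-time-priority matcher for one binary opinion market.
type Order = { id: string; side: "buy" | "sell"; price: number; qty: number };
type Trade = { buyId: string; sellId: string; price: number; qty: number };

class OrderBook {
  private buys: Order[] = [];  // highest price first, oldest first
  private sells: Order[] = []; // lowest price first, oldest first
  readonly trades: Trade[] = [];

  place(o: Order): void {
    const opposite = o.side === "buy" ? this.sells : this.buys;
    // Cross against the best resting orders while prices overlap.
    while (o.qty > 0 && opposite.length > 0) {
      const best = opposite[0];
      const crosses =
        o.side === "buy" ? o.price >= best.price : o.price <= best.price;
      if (!crosses) break;
      const qty = Math.min(o.qty, best.qty);
      // Execution happens at the resting order's price (time priority).
      this.trades.push({
        buyId: o.side === "buy" ? o.id : best.id,
        sellId: o.side === "sell" ? o.id : best.id,
        price: best.price,
        qty,
      });
      o.qty -= qty;
      best.qty -= qty;
      if (best.qty === 0) opposite.shift();
    }
    if (o.qty > 0) {
      // Rest the unfilled remainder on its own side of the book.
      const own = o.side === "buy" ? this.buys : this.sells;
      own.push(o);
      own.sort((a, b) => (o.side === "buy" ? b.price - a.price : a.price - b.price));
    }
  }
}
```

The production concerns the paragraph lists — atomic wallet settlement, a durable audit trail, fraud checks — wrap around a matcher like this; each fill would debit and credit wallets inside a single transaction rather than as separate writes.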

The fourth problem was scaling for cricket-specific traffic patterns. Unlike most consumer apps where traffic follows a predictable daily curve, cricket apps experience extreme spike-load patterns. A normal Tuesday afternoon might see baseline traffic, but at 7:30 PM on a Friday when an IPL match begins, traffic can multiply by twenty in a span of minutes. Then, when the match ends, traffic drops just as fast. Engineering a system that can handle these spikes without overprovisioning compute (which would burn money during off-hours) and without underprovisioning (which would crash the platform during the moments that matter most) required careful capacity planning, autoscaling logic, queue-based load shedding, and aggressive caching strategies.
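
The queue-based load shedding mentioned above can be sketched as a priority-aware bounded queue: critical work always queues, while bulk work is rejected fast once the queue is saturated, so the system degrades instead of collapsing during a spike. Thresholds and names here are illustrative assumptions.

```typescript
// Sketch of queue-based load shedding under spike traffic.
type Job = { priority: "critical" | "bulk" };

class SheddingQueue {
  private queue: Job[] = [];
  constructor(private maxDepth: number) {}

  offer(job: Job): boolean {
    // Critical jobs (e.g. trade settlement) always queue; bulk jobs
    // (e.g. analytics writes) are shed once the queue is saturated.
    if (job.priority === "bulk" && this.queue.length >= this.maxDepth) {
      return false; // shed: caller gets an immediate, cheap rejection
    }
    this.queue.push(job);
    return true;
  }

  drain(n: number): Job[] {
    return this.queue.splice(0, n);
  }

  get depth(): number {
    return this.queue.length;
  }
}
```

An immediate rejection costs microseconds; letting every request queue during a 20x spike costs seconds of latency for everyone, including the requests that matter most.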

The fifth problem was unifying multi-platform experience. Cricket Winner had to feel coherent whether the user was on an iPhone, an Android phone with three years of accumulated bloatware, a desktop browser, or a tablet. The mobile app and web app had to share state — if a user placed a trade on their phone, opened their laptop, and refreshed the dashboard, the trade had to be visible immediately. This required us to think carefully about API design, session management, and real-time state synchronization across devices.

The sixth problem was content moderation and safety. Any platform with user-generated elements — and even a comment section qualifies — needs strong moderation infrastructure. Cricket fandom in India occasionally veers into communal tension, especially around India-Pakistan matches, and we built moderation tooling from day one rather than as an afterthought.

The seventh problem was localization and accessibility. While the V1 launched in English, the team had ambitions to support Hindi, Tamil, Telugu, Bengali, and Marathi within the first year. The data layer, design system, and frontend frameworks all had to be architected with localization in mind from day one.

The eighth was push notification strategy. Push notifications are simultaneously the most powerful retention tool in mobile and the most easily abused. We had to design a notification system that delivered critical match moments — wickets, sixes, milestones — without overwhelming users with noise.
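
One way to express that balance is a per-user gate: high-signal events always pass, and everything else is rate-limited to one push per cooldown window. This is a hedged sketch of the pattern — the event kinds and the window below are assumed values, not the shipped rules.

```typescript
// Sketch of a per-user notification gate: wickets always send;
// lower-signal pushes are throttled to one per cooldown window.
type Push = { userId: string; kind: "wicket" | "six" | "news"; at: number };

class NotificationGate {
  private lastSent = new Map<string, number>();
  constructor(private cooldownMs: number) {}

  shouldSend(p: Push): boolean {
    if (p.kind === "wicket") {
      // Critical match moments bypass the throttle but still
      // reset the window, so they don't compound with noise.
      this.lastSent.set(p.userId, p.at);
      return true;
    }
    const last = this.lastSent.get(p.userId);
    if (last !== undefined && p.at - last < this.cooldownMs) return false;
    this.lastSent.set(p.userId, p.at);
    return true;
  }
}
```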

The ninth was payment integration and KYC compliance for the trading platform. Onboarding users to a real-money product in India requires a careful KYC flow, integration with verified payment processors, compliance with applicable local regulations, and a withdrawal process that is fast enough to feel modern but secure enough to prevent fraud.

The tenth was analytics and observability. We needed deep instrumentation across every layer so the founders could see exactly what was working, what was failing, and where users were dropping off — without us having to build dashboards for every question they could ever ask.

And the eleventh, finally, was the emotional one: building a product that felt like cricket. Cricket has rhythm, tension, drama, statistical density, and tribal affiliation. A great cricket app captures all of that in its UX. It feels alive when the match is alive. It feels reverent when a record is broken. It feels electric when a wicket falls in the final over. We made it our job to engineer that feeling into every screen, every transition, every notification.


Day 21

Our Approach & Methodology

Once the problem space was deeply understood, we moved into our standard Xenotix Labs delivery methodology — a hybrid Agile approach that blends two-week sprints with monthly milestone gates and a strong emphasis on continuous design-development-QA collaboration rather than the waterfall handoffs that plague so many agency engagements.

Our methodology rests on five pillars. The first is design-led product thinking. Every feature we build starts in Figma, not in code. Before a single backend endpoint is implemented, our designers create high-fidelity prototypes that go through three to five revision rounds with the client. This sounds slow, and in the short term it is, but it saves enormous amounts of rework downstream. By the time a developer picks up a feature, every interaction state, every edge case, every empty state, every error state has been visualized and approved. There are no questions about what the developer is supposed to build — there is a Figma file showing exactly what to build.

The second pillar is microservices from day one, even when they're "overkill." There is a school of thought in startup engineering that says you should start with a monolith and split into microservices later when you have scale problems. That advice is sound for some projects, but it is wrong for projects like Cricket Winner where the founders had clear plans for rapid feature expansion and the cost of refactoring a monolith into services later would have been crippling. We started with seven microservices — auth, scores, news, trading, payments, notifications, and admin — each owning its own database collection and exposing well-defined APIs. This let teams work in parallel, deploy independently, and scale services individually based on traffic patterns.

The third pillar is real-time-first architecture. For Cricket Winner specifically, real-time was not an enhancement — it was the product. So we made WebSockets and Kafka first-class citizens of the architecture from day one rather than bolting them on later. Every feature we designed was reviewed against the question: "Does this feature need real-time updates, near-real-time updates, or eventual consistency?" The answer determined which infrastructure pattern we used.

The fourth pillar is observability-first development. Before we wrote production features, we wrote production logging, metrics, and tracing. Every API endpoint emits structured logs. Every Kafka event is traceable end-to-end. Every database query has a latency metric. By the time we shipped V1, we already had Grafana dashboards, alerting policies, and runbooks for the top fifteen failure modes. This is not glamorous work, but it is the difference between a system that fails gracefully at 2 AM and one that wakes up the founders.
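
The "every endpoint emits structured logs" discipline reduces, at its smallest, to wrapping each handler so one JSON-shaped record with route, status, and latency is emitted per request. A minimal sketch, with the log sink injected so it can be tested; field names are illustrative, not the production schema.

```typescript
// Sketch of per-request structured logging: one record per request,
// with route, status, and measured latency.
type LogLine = { route: string; status: number; latencyMs: number };
type Handler = () => { status: number };

function withRequestLog(
  route: string,
  handler: Handler,
  sink: (l: LogLine) => void
): Handler {
  return () => {
    const start = Date.now();
    const res = handler(); // the wrapped endpoint does its real work here
    sink({ route, status: res.status, latencyMs: Date.now() - start });
    return res;
  };
}
```

In production the sink would write JSON lines consumed by the metrics pipeline; the point is that instrumentation is applied uniformly by the wrapper, not remembered (or forgotten) endpoint by endpoint.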

The fifth and final pillar is QA-as-a-co-creator, not a final gate. Our QA lead joined the project in week one, not week sixteen. She was in design reviews, sprint plannings, and architecture discussions. She helped shape acceptance criteria for every story before development started. By the time a feature reached the testing stage, the test cases had already been written, the edge cases had already been catalogued, and the feature was usually built correctly the first time. This single methodology choice cuts our defect rate by an order of magnitude compared to traditional handoff-style QA.

Our sprint cadence ran on two-week cycles with a Monday planning, daily standups, mid-sprint design-tech sync, end-of-sprint demo to the client, and a retro on Friday afternoon. We used Jira for ticket management, Slack for daily communication, Figma for all design assets, and a shared Notion workspace for documentation. The client had access to all of these tools — we believe in radical transparency and have never operated with the kind of "we will show you something in two months" posture that other agencies adopt. The Cricket Winner founders could see, at any moment, exactly what we were building, what was blocked, what was shipped, and what was coming next.


Day 28

Design Process — Figma to Production

Design at Xenotix Labs is not a phase. It is a parallel discipline that runs alongside engineering for the entire lifetime of the project. For Cricket Winner, we structured the design process into six overlapping streams: discovery and persona work, information architecture, wireframing, visual design and design system creation, high-fidelity prototyping, and design QA during development.

The discovery phase opened with a five-day workshop where our lead designer worked directly with the founders to articulate brand voice, design principles, and emotional tone. Cricket Winner needed to feel modern but not cold, energetic but not chaotic, premium but not exclusionary. We anchored on three design principles: clarity over decoration, motion with purpose, and density done right. These principles guided every visual decision downstream. When debate arose about whether to add a flashy gradient or a subtle background pattern, we asked ourselves whether the addition served clarity or worked against it, and the answer usually wrote itself.

We then mapped seven user personas: the casual fan who checks scores during the workday, the hardcore enthusiast who watches every ball, the fantasy player who needs deep statistical breakdowns, the trader who is here primarily for the opinion markets, the news consumer who reads cricket journalism, the social fan who wants community engagement, and the lapsed fan whom the platform might reactivate through breaking news notifications. Each persona received a one-page profile, a needs and frustrations map, and a primary user journey. This persona work directly informed the information architecture — the navigation, the content hierarchy, and the feature prioritization.

The information architecture phase produced a sitemap-style document for both the mobile app and the web platform. We ran two card-sorting exercises with proxy users to validate our IA hypotheses. The mobile app settled on five primary tabs — Home, Matches, News, Trade, Profile — with deep navigation within each. The web platform, given the larger canvas and SEO considerations, used a more traditional top navigation with a focus on content discoverability.

Wireframing was done in Figma using a low-fidelity grayscale approach. We deliberately avoided color and typography decisions during wireframing because we wanted the client to focus on layout, hierarchy, and flow rather than aesthetics. Every screen went through at least two wireframe iterations before progressing to visual design. The wireframes were also annotated with interaction notes — "this card opens a bottom sheet," "this list paginates at twenty items," "this button transitions to a loading state for between two and four seconds" — so that the visual designer and the frontend engineer would have shared context.

The visual design phase produced a comprehensive design system before any individual screen was finalized. The design system included a color palette with semantic tokens (primary action color, success state, error state, warning state, info state, surface levels one through five, text levels one through four), a typography scale based on a modular ratio with six size steps, an icon library of over two hundred and forty custom-designed icons, a spacing system based on a four-pixel base unit, an elevation scale with four shadow tiers, and a component library covering buttons, inputs, cards, modals, bottom sheets, navigation patterns, and over forty other reusable elements. Every component had documented states — default, hover, focus, active, disabled, loading — and every component was built using Figma auto-layout so that downstream design work could move quickly.

High-fidelity prototyping took the wireframes and applied the design system to produce screen-by-screen Figma files for all forty-seven primary screens of the V1 product. Each screen was then linked into a Figma prototype that allowed the founders to click through the entire product as if it were live. This prototype became the single source of truth for the product. When development questions arose later, the answer was always "look at the Figma prototype." This eliminated a huge category of ambiguity that typically slows development.

Design QA during development was the final and most overlooked phase. Once developers began implementing screens, our designer reviewed every implemented screen against the Figma source, comparing pixel-by-pixel, interaction-by-interaction, animation-by-animation. Tickets were filed for every discrepancy, no matter how minor, and developers fixed them in subsequent sprints. The result is an app that looks, in production, exactly the way it looked in Figma. There is no drift. There is no "good enough" implementation. There is just the design, faithfully built.

The Design System in Detail

The Cricket Winner design system deserves a closer look because it is a substantial body of work in its own right. We treat design systems as products, not artifacts — they have versions, they have changelogs, they have documentation, they have governance, and they evolve in response to product needs.

The color system is built on a pair of foundations: a brand palette that gives Cricket Winner its identity, and a semantic palette that maps abstract roles to specific colors. The brand palette features a deep navy as the primary surface color, an electric green for primary actions, and a warm gold for accent elements that signal premium content or important actions. The semantic palette layers on top of the brand palette: success-green, warning-amber, error-red, info-blue, neutral grays in a six-step scale from near-white to near-black, and text colors at four contrast levels designed to meet WCAG AA standards on every background. Every screen draws from the semantic palette rather than the brand palette directly, which means we can theme the platform — for instance, for a future dark mode or a sponsored white-label variant — by remapping semantic tokens without touching individual screens.
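
The brand-versus-semantic layering described above can be sketched in a few lines: screens only ever reference semantic roles, and a theme (dark mode, a white-label variant) is just a remapping of roles onto colors. The hex values and token names below are placeholders, not Cricket Winner's actual palette.

```typescript
// Sketch of two-layer color tokens: brand palette underneath,
// semantic roles on top, screens consuming only the roles.
const brand = { navy: "#0B1F3A", green: "#2EE56B", gold: "#E0A92E" } as const;

type SemanticTokens = { surface: string; actionPrimary: string; accent: string };

const lightTheme: SemanticTokens = {
  surface: "#FFFFFF",
  actionPrimary: brand.green,
  accent: brand.gold,
};

// A dark mode remaps roles without touching any screen code.
const darkTheme: SemanticTokens = { ...lightTheme, surface: brand.navy };

function buttonColor(theme: SemanticTokens): string {
  return theme.actionPrimary; // screens never see brand.* directly
}
```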

The typography system is built on a single typeface, chosen for its clarity at small sizes and its character at large sizes. We use a modular type scale based on a 1.250 ratio, producing six size steps from 12 pixels to 36 pixels in our base scale, with a separate display scale for marketing and hero contexts. Line heights are set per size step rather than as a global rule, recognizing that what works for body text does not work for display headlines. Font weights are restricted to four values — regular, medium, semibold, and bold — and we use weight differentiation purposefully to establish hierarchy rather than relying on size differences alone.
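
The arithmetic behind a 1.250 modular scale is simple enough to show directly. Note that a pure ratio from a 12px base overshoots slightly at the top step, so production scales (presumably including this one) hand-round the computed values.

```typescript
// Modular type scale: each step multiplies the previous by the ratio.
// A 1.250 ratio from a 12px base gives, before rounding:
// 12, 15, 18.75, 23.44, 29.30, 36.62 -- typically hand-rounded
// to values like 12 / 15 / 19 / 23 / 29 / 36.
function typeScale(base: number, ratio: number, steps: number): number[] {
  return Array.from({ length: steps }, (_, i) => base * ratio ** i);
}
```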

The icon library was custom-designed for Cricket Winner rather than pulled from an off-the-shelf set. This was a significant investment but a meaningful one — generic icons make a product feel generic, and Cricket Winner needed to feel distinctly its own. Our designers created over two hundred and forty icons covering navigation, actions, content categories, sports-specific concepts, and marketing illustrations. Every icon is delivered as an optimized SVG, with consistent stroke weights, corner radii, and visual weight across the set. The icons live in a Figma library that the developers consume directly, ensuring that what is in Figma matches what is in production.

The spacing system uses a four-pixel base unit with a curated scale of permitted values: 4, 8, 12, 16, 20, 24, 32, 40, 48, 64, 80, 96. This restriction prevents the visual chaos that emerges when designers and developers freely choose arbitrary spacing values. Padding, margins, gaps between elements — everything draws from this scale. The result is a layout system that feels coherent at every zoom level and on every screen size.
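
One way to enforce such a curated scale in code — a sketch, not the shipped tooling — is to snap any requested value to the nearest permitted step, so arbitrary spacing can never reach a layout:

```typescript
// The permitted spacing scale from the design system (4px base unit).
const SPACING = [4, 8, 12, 16, 20, 24, 32, 40, 48, 64, 80, 96] as const;

// Snap an arbitrary pixel value to the nearest permitted step.
function snapToScale(px: number): number {
  return SPACING.reduce((best, v) =>
    Math.abs(v - px) < Math.abs(best - px) ? v : best
  );
}
```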

The component library covers every reusable element in the product. Buttons come in five variants — primary, secondary, tertiary, ghost, and danger — and four sizes — small, medium, large, and extra-large. Each variant and size combination has documented states for default, hover, focus, active, disabled, and loading. Inputs cover text fields, password fields, search fields, multi-line areas, dropdown selectors, multi-select chips, date pickers, time pickers, file uploaders, and toggle switches. Cards have a small set of variants — basic, interactive, dense, expansive — that every screen draws from. Modals and bottom sheets are documented with sizing rules, transition specifications, and accessibility behaviors. Navigation components cover top bars, tab bars, side navigation, breadcrumbs, and pagination.

Beyond the component library, we documented patterns — combinations of components that solve recurring user-experience problems. The empty-state pattern, applied wherever a list might have zero items, specifies an illustration, a heading, a body sentence, and an optional call-to-action. The loading-state pattern specifies skeleton screens for content-heavy views and spinners for action-oriented views. The error-state pattern specifies how to communicate failures with appropriate empathy and clear paths to resolution. The form-validation pattern specifies inline error messages, success confirmations, and field-level help text. These patterns are documented in our Figma library and referenced explicitly in every screen design.

The design system is itself versioned with semantic versioning — major versions for breaking changes, minor versions for additions, patch versions for fixes. Major version changes go through a formal review process that involves both designers and developers, since they require coordinated work to update all consuming screens. The system has gone through three major versions during the lifetime of Cricket Winner so far, with each major version reflecting accumulated learning and broader product needs.

The Motion Design System

Motion is its own dimension in the Cricket Winner design system. Every transition, every animation, every micro-interaction has been deliberately designed rather than left to the defaults that frameworks provide. The motion system is built on three principles: motion should communicate state changes, motion should respect user attention, and motion should feel native to the platform.

State change communication means that when something in the interface changes, motion helps the user understand what changed and where. When a new ball is bowled and the score updates, the score number animates rather than snapping, drawing the eye to the change without being startling. When a user places a trade and their portfolio updates, the new position slides into the list rather than appearing instantly. When a notification arrives, it animates in from the top of the screen with a subtle bounce that says "look at me" without being aggressive.

Respect for user attention means we use motion sparingly. There is no decorative animation for the sake of decoration. Every motion serves a functional purpose. If a user is scrolling rapidly through a list, we suppress the per-item animations that would otherwise fire — the user's intent is to scan, not to watch each item appear. If a user is in a deep focus state — for example, watching a live match — we minimize peripheral motion that might distract.

Native feel means animation timings, easing curves, and gesture mechanics conform to platform conventions. iOS uses one set of conventions; Android uses another. Where Flutter's defaults already match these conventions, we let them. Where the defaults fall short — for instance, the defaults for tab transitions in some Flutter widgets do not exactly match iOS — we override them with custom animations.


Day 35

Tech Stack Deep Dive

Every technology decision in Cricket Winner was made deliberately, with documented tradeoffs and explicit reasoning. We do not chase trends and we do not pick stacks because they are fashionable. We pick them because they are right for the problem. Here is the deep reasoning behind every major choice.

Mobile: Flutter

We chose Flutter for the mobile applications — both the user-facing app and the additional apps planned on the roadmap, including an admin app — for several compounding reasons. First, Flutter offers genuine cross-platform parity. Unlike React Native, where iOS and Android often diverge in subtle ways requiring platform-specific patches, Flutter's rendering engine produces identical output across both platforms. Second, Flutter's performance profile is substantially better than React Native's for animation-heavy interfaces, which Cricket Winner has in abundance — score tickers, trading charts, news carousels, all moving and transitioning constantly. Third, the Flutter team's investment in tooling — DevTools, hot reload, the widget inspector — significantly accelerates iteration speed compared to native development. And fourth, Flutter's null-safety guarantees and strong type system, combined with Dart's modern language features, produce more maintainable codebases than the alternatives.

We also evaluated fully native development — iOS in Swift and Android in Kotlin. We rejected this option because it would have required two separate development streams, doubled the engineering cost, and slowed feature parity between platforms. For a startup operating with a finite runway, Flutter's economics are simply better. Flutter's performance ceiling is high enough that we can match or exceed the perceived smoothness of native apps for the vast majority of user interactions, and in the few cases where native truly matters — say, advanced video playback or hardware-specific features — Flutter's platform channel system gives us a clean escape hatch.

Web: Next.js

For the web platform, Next.js was the obvious choice. The web product had three primary use cases: a destination for desktop users who prefer the larger screen, an SEO-driven content platform where Cricket Winner's news articles would rank on Google, and an admin dashboard for the operations team. Next.js handles all three workloads better than any alternative we considered.

Server-side rendering in Next.js means our cricket news articles are fully crawlable by search engines, with metadata, structured data, and content present in the initial HTML response. We did not have to build a separate SEO strategy on top of a single-page-app shell. Static site generation lets us pre-render evergreen content like player profile pages and team pages, serving them from CDN edge nodes with millisecond latency. Incremental static regeneration lets us update those pages every few minutes without full rebuilds. The hybrid SSR/SSG/CSR model in Next.js fits our content workload perfectly.

We considered Remix and SvelteKit as alternatives. Remix is a strong contender, but the ecosystem around Next.js — third-party integrations, hosting solutions, documentation, hiring pool — was simply more mature. SvelteKit is technically elegant but had not yet reached the production-readiness threshold we require for client work at the time of project initiation.

Backend: Node.js with TypeScript

The backend services run on Node.js with TypeScript across the board. The reasoning is straightforward: JavaScript on both ends of the wire reduces context switching for engineers, accelerates onboarding, and lets us share validation logic, type definitions, and utility code between frontend and backend. TypeScript's static type system catches a category of bugs that would otherwise reach production, and the TypeScript tooling ecosystem — ts-node, tsx, modern testing frameworks — produces a developer experience that rivals any backend platform.

Node.js's event-loop-based concurrency model is also genuinely the right fit for Cricket Winner's workload. Most of what our backend does is wait — wait for the database, wait for an external API, wait for a Kafka message, wait for a WebSocket frame. Node.js handles waiting brilliantly. CPU-bound tasks, of which we have very few, can be offloaded to worker threads or to dedicated services in other languages if the need ever arises.

We considered Go, which we use for some other client projects, and we considered Python with FastAPI. Go was rejected because the team's existing JavaScript expertise meant we could ship faster on Node.js without sacrificing performance for our specific workload. Python was rejected because, at the concurrency profile Cricket Winner requires, the GIL-related challenges in Python introduce more friction than benefit.

Database: MongoDB

We chose MongoDB as the primary database for Cricket Winner. This is a less common choice than PostgreSQL, which we use for most of our projects, and the reasoning is specific to this project. Cricket Winner is fundamentally a read-heavy, document-shaped workload whose schema evolves rapidly. Match data, news articles, user feeds, opinion market state — all are nested documents whose shape changes over time as features evolve.

MongoDB's horizontal scaling story through sharding is also a better fit for our anticipated growth curve than Postgres's primary-replica model. As Cricket Winner scales beyond a single region, we can shard MongoDB collections by user-id or match-id and continue to operate with strong performance. We use MongoDB Atlas as our managed offering, which gives us automated backups, point-in-time recovery, multi-region replication, and a mature monitoring stack out of the box.

For the trading engine specifically, where we need stronger consistency guarantees and ACID transactions across multiple collections, we use MongoDB's multi-document transaction support, which has matured significantly over the past several years. For the most critical financial operations — wallet credits and debits, trade settlements — we layer additional safeguards including idempotency keys, audit logs in a separate collection, and a reconciliation job that runs every fifteen minutes to detect any drift.

Real-Time: WebSockets

For real-time features, we use WebSockets with the Socket.IO library. The choice between raw WebSockets, Socket.IO, and alternatives like Server-Sent Events came down to feature requirements. SSE is unidirectional, which doesn't fit our trading and chat features. Raw WebSockets are perfect for the wire protocol but lack the room-based abstractions, automatic reconnection, and fallback transport mechanisms that Socket.IO provides out of the box.

We architected our WebSocket layer as a dedicated microservice that sits behind a sticky-session load balancer. Clients connect to the WebSocket gateway, authenticate, subscribe to topic channels — for example, a specific match's score channel or a specific opinion market's price channel — and receive updates pushed from upstream services via Kafka. This decoupling means the score-publishing service and the WebSocket gateway are independently scalable, and a WebSocket gateway crash doesn't affect score ingestion or storage.
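The topic-channel model behind the gateway can be sketched in a few lines. This is an illustrative sketch, not the production gateway: the `RoomRegistry` name, the callback-based `Send` type, and the channel naming are assumptions, and the real service speaks Socket.IO rather than raw callbacks.

```typescript
// Sketch of the gateway's room model: clients subscribe to topic channels
// (e.g. "match:1042:score") and each event from Kafka fans out to every
// member of the matching room. All names here are illustrative.

type Send = (payload: string) => void;

class RoomRegistry {
  private rooms = new Map<string, Map<string, Send>>();

  subscribe(channel: string, clientId: string, send: Send): void {
    if (!this.rooms.has(channel)) this.rooms.set(channel, new Map());
    this.rooms.get(channel)!.set(clientId, send);
  }

  unsubscribe(channel: string, clientId: string): void {
    this.rooms.get(channel)?.delete(clientId);
  }

  // Serialize once per event, not once per recipient, then push the same
  // frame to every subscribed client. Returns the number of recipients.
  fanOut(channel: string, event: unknown): number {
    const members = this.rooms.get(channel);
    if (!members) return 0;
    const frame = JSON.stringify(event);
    for (const send of members.values()) send(frame);
    return members.size;
  }
}
```

Because the registry is keyed by channel, a gateway crash loses only transient subscription state — clients reconnect and resubscribe, and no score data is lost upstream.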

Background Jobs and Event Streaming: Apache Kafka

Kafka is the central nervous system of Cricket Winner. Every high-throughput event in the platform — score updates, news articles, trading events, user actions, notification dispatches, analytics events — flows through Kafka topics. We chose Kafka over RabbitMQ, which we use in many of our other projects, specifically because of its throughput characteristics and its log-based persistence model.

Kafka's ability to sustain hundreds of thousands of events per second per topic at millisecond-scale latency is exactly what Cricket Winner needs during peak match-day moments. The log-based persistence means that if a downstream consumer crashes or falls behind, it can replay events from a checkpoint and catch up without losing data. This is crucial for our analytics and audit workloads. Kafka's consumer group abstraction also lets us scale individual consumer services horizontally without coordination — perfect for our microservices-heavy architecture.

We use AWS Managed Streaming for Kafka (MSK) as our managed offering, which removes the operational overhead of running Zookeeper and broker clusters ourselves while still giving us full Kafka API compatibility. Our topics are partitioned strategically — by match-id for score events, by user-id for trading events, by article-id for news events — to ensure ordering guarantees within the relevant scope while still allowing parallel consumption.
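The property that keyed partitioning buys us can be shown with a simplified stand-in for Kafka's partitioner. Kafka's default partitioner hashes the message key with murmur2; the FNV-1a hash below is just an assumption-for-illustration that demonstrates the invariant we rely on — the same key always lands on the same partition, so per-match ordering is preserved.

```typescript
// Simplified stand-in for Kafka's default partitioner: a stable hash of the
// message key modulo the partition count. Kafka itself uses murmur2; FNV-1a
// here only illustrates the guarantee that matters — identical keys (e.g. a
// match-id) always map to the same partition, preserving per-match ordering.

function fnv1a(key: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

function partitionFor(key: string, partitionCount: number): number {
  return fnv1a(key) % partitionCount;
}
```

Every score event for a given match carries the same key, so every consumer in a group sees that match's events in publication order — while different matches still spread across partitions for parallelism.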

Architecture: Microservices

We adopted a microservices architecture from day one. Our service map at V1 launch had seven primary services: Auth Service, Match Service, News Service, Trading Service, Wallet Service, Notification Service, and Admin Service. Each service has its own database (or own collection within MongoDB), its own deployment pipeline, its own scaling policy, and its own on-call ownership.

Inter-service communication happens primarily through Kafka for asynchronous events and through HTTP/REST for synchronous queries. We considered gRPC for inter-service calls and may migrate critical paths to it in the future, but at V1 we prioritized engineering velocity and chose REST with strict OpenAPI contracts. Every service exposes its API via an OpenAPI specification, and we use code generation to produce typed clients in Node.js, ensuring that consumers cannot accidentally call APIs incorrectly.

Deployment: AWS

Cricket Winner runs entirely on AWS. We use ECS Fargate for our microservices, which gives us serverless container orchestration without the overhead of running Kubernetes ourselves. For some workloads where we need more control — specifically the WebSocket gateway with its sticky-session requirements — we use EC2 with auto-scaling groups behind an Application Load Balancer.

MongoDB Atlas runs in a peered VPC for low-latency database access. AWS MSK provides our Kafka layer. CloudFront sits in front of all static assets and provides CDN edge caching for media, news article images, and pre-rendered Next.js pages. Route 53 handles DNS with health checks and failover routing. CloudWatch and AWS X-Ray provide our metrics and distributed tracing layer, which we augment with Grafana for dashboards and Sentry for error tracking.

Testing: A Three-Tier Strategy

Our testing strategy spans unit tests, integration tests, and production smoke tests. Unit tests cover business logic in isolation using Jest for backend code and the Flutter test framework for mobile. Integration tests run against ephemeral test environments spun up per pull request using Terraform-managed AWS environments, validating end-to-end flows across services. Production smoke tests run continuously against the live environment, hitting critical paths like login, score retrieval, and trading order placement, alerting on-call engineers within seconds of any regression.

Day 42

Architecture & System Design

The Cricket Winner architecture is best understood as a series of concentric layers, each with a specific responsibility, communicating through well-defined contracts. We will walk through each layer from the outside in.

The edge layer is the user-facing tier. This includes our Flutter mobile applications running on user devices, our Next.js web application served from CloudFront edge nodes globally, and our admin dashboard accessed by Cricket Winner's internal team. Users connect to this layer through HTTPS for content and over WebSocket connections for real-time data streams. Authentication happens at this layer through JWT tokens, with refresh tokens stored securely on the client and access tokens used for short-lived API requests.

The API gateway layer sits between the edge and the microservices. We use AWS API Gateway for HTTP traffic and a custom WebSocket gateway service for real-time connections. The HTTP gateway handles request routing, rate limiting, request validation against OpenAPI contracts, and authentication token verification. Requests that pass these checks are forwarded to the appropriate downstream microservice. The WebSocket gateway handles connection management, room subscriptions, and the fan-out of events from Kafka topics to connected clients.

The microservices layer contains our seven primary services, each running as a containerized application on ECS Fargate. Services communicate with each other through two channels. Synchronous read queries — for example, the trading service asking the user service for a user's profile — happen over internal HTTP using service-to-service authentication. Asynchronous events — for example, the match service publishing a score update — happen over Kafka. Every service has a clearly documented set of consumed topics and produced topics, making the data flow explicit and traceable.

The data layer is split across several stores. MongoDB Atlas hosts our primary application data: users, matches, news articles, opinion markets, trades, wallets, transactions. We use Redis for short-lived state including session data, rate-limit counters, leaderboards, and a hot cache layer for frequently accessed read paths. AWS S3 stores all media assets — news article images, user avatars, video clips — with CloudFront fronting them for global low-latency delivery. AWS Athena queries archived event logs in S3 for ad-hoc analytics workloads.

The integration layer connects Cricket Winner to the outside world. We integrate with a licensed cricket data feed for live ball-by-ball scoring, with payment gateways for wallet deposits and withdrawals, with KYC providers for identity verification, with Firebase Cloud Messaging for push notifications, with SendGrid for transactional email, and with various analytics platforms for product analytics and marketing attribution.

A dedicated event flow worth highlighting is the live score path because it is the platform's most demanding pipeline. The licensed data feed pushes ball-by-ball updates to our Match Service through a webhook endpoint. The Match Service validates the payload, persists the new ball to MongoDB, computes derived state — current score, run rate, partnership, projected total — and publishes a structured event to the match.score.updated Kafka topic. Multiple downstream consumers subscribe to this topic. The WebSocket gateway service receives the event and pushes it to all connected clients subscribed to that match's room, achieving end-to-end delivery from data feed to user device in under 800 milliseconds. The Trading Service receives the same event and updates the prices of all opinion markets dependent on that match, which in turn flows to the trading WebSocket channel. The Notification Service receives the event and decides whether the moment is significant enough to trigger a push notification. The Analytics Service receives the event for product analytics. This single-source, multi-consumer pattern is repeated throughout the platform and is exactly why Kafka is the right backbone for this workload.

For fault tolerance, every microservice runs at least two replicas across multiple availability zones. Auto-scaling policies expand replicas based on CPU, memory, and Kafka consumer lag metrics. Database connections use connection pools with circuit breakers that fail fast and recover gracefully. WebSocket connections automatically reconnect on the client side using an exponential backoff strategy with jitter. Every external API call is wrapped with retry logic, circuit breakers, and bulkheads to prevent cascading failures.
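The client-side reconnection policy mentioned above is small enough to sketch in full. The parameter values are illustrative, not the production configuration; the shape is standard "full jitter" backoff, which prevents thousands of clients dropped by the same gateway crash from reconnecting in lockstep.

```typescript
// Exponential backoff with full jitter for WebSocket reconnects. The delay
// grows exponentially with each failed attempt, is capped at maxMs, and is
// then multiplied by a uniform random factor so reconnect storms spread out.
// baseMs/maxMs values are illustrative defaults, not production settings.

function reconnectDelayMs(
  attempt: number,                      // 0-based retry attempt
  baseMs = 500,
  maxMs = 30_000,
  random: () => number = Math.random,   // injectable for testing
): number {
  const ceiling = Math.min(maxMs, baseMs * 2 ** attempt);
  return random() * ceiling;            // full jitter: uniform in [0, ceiling)
}
```

Injecting the random source keeps the function deterministic under test while remaining properly randomized in production.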

For observability, every service emits structured JSON logs to CloudWatch, metrics to CloudWatch and Grafana, and distributed traces to AWS X-Ray. We have alerting policies covering availability, error rates, latency percentiles, queue lag, database performance, and business-level metrics like trade settlement success rate. On-call engineers receive alerts through PagerDuty, with runbooks linked to every alert that document the typical causes and remediation steps for that alert type.

Deep Dive: The Opinion Trading Engine

The opinion trading engine deserves a section of its own because of how technically nuanced it is. At its core, the engine implements a continuous double auction market for binary outcome contracts. Each opinion market poses a yes-or-no question — for example, "Will India win this match?" — and users buy and sell positions in either Yes or No contracts. The price of a contract reflects the implied probability of the outcome, oscillating between zero and one hundred as the underlying match progresses and as buy and sell pressure shifts.

The matching engine runs as a dedicated Node.js microservice with its own state machine for each active market. When a market opens, the engine initializes an order book with an empty bid and ask side for both Yes and No contracts. As users submit orders, the engine matches them according to price-time priority — the highest-priced buy order matches against the lowest-priced sell order, with earlier orders taking precedence at the same price. Matches generate trade events that are published to Kafka, settled atomically against user wallets, and reflected in updated order book state that streams back to all connected clients via WebSocket.
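The price-time priority rule can be sketched as a minimal matching pass over one side of a market. This is a sketch of the priority rule only — the order and fill shapes are illustrative assumptions, and the real engine maintains incremental books rather than re-sorting per batch.

```typescript
// Minimal price-time priority matching. Prices use the 0–100 contract scale
// described above. Buys are sorted highest-price-first, sells lowest-first,
// ties broken by submission sequence number (earlier wins). Trades execute
// at the resting sell price. Input orders are mutated as quantity fills.

interface Order {
  id: string;
  side: "buy" | "sell";
  price: number;   // 0–100, implied probability
  qty: number;
  seq: number;     // submission sequence number (time priority)
}

// Returns fills as [buyId, sellId, price, qty] tuples.
function matchOrders(orders: Order[]): Array<[string, string, number, number]> {
  const buys = orders.filter(o => o.side === "buy")
    .sort((a, b) => b.price - a.price || a.seq - b.seq);
  const sells = orders.filter(o => o.side === "sell")
    .sort((a, b) => a.price - b.price || a.seq - b.seq);
  const fills: Array<[string, string, number, number]> = [];
  let bi = 0, si = 0;
  while (bi < buys.length && si < sells.length && buys[bi].price >= sells[si].price) {
    const qty = Math.min(buys[bi].qty, sells[si].qty);
    fills.push([buys[bi].id, sells[si].id, sells[si].price, qty]);
    buys[bi].qty -= qty;
    sells[si].qty -= qty;
    if (buys[bi].qty === 0) bi++;
    if (sells[si].qty === 0) si++;
  }
  return fills;
}
```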

The hardest engineering problem in this engine is concurrency. At peak match-day moments, hundreds of users may submit orders against the same market within milliseconds. A naive implementation using database locks would serialize all order processing, throttling throughput to single-digit orders per second. Instead, we adopted an in-memory order book design where each market's order book lives in the working memory of a single matching engine instance, with all order submissions for that market routed to that instance through Kafka partition keys. This single-writer design eliminates lock contention entirely while providing strict ordering guarantees within each market.

To ensure durability, every order and every trade is persisted to MongoDB before the matching engine acknowledges the submission to the client. We use a write-ahead-log pattern: orders are first appended to an immutable orders collection with a sequence number, then the matching engine processes them in sequence-number order. If the matching engine instance crashes, a replacement instance recovers its in-memory state by replaying the orders log from the last checkpoint. We snapshot the order book to MongoDB every thirty seconds, so recovery typically requires replaying at most thirty seconds of orders.
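The recovery path described above — snapshot plus ordered replay — can be sketched with plain arrays standing in for the MongoDB collections. The state shape (`openQty` keyed by order id) is an illustrative simplification of the real order book.

```typescript
// Sketch of crash recovery: rebuild in-memory state by applying logged
// orders in sequence-number order, starting from the last snapshot. Entries
// at or below the snapshot's sequence number are skipped, which makes
// recovery idempotent. Shapes are illustrative, not the production schema.

interface Snapshot { lastSeq: number; openQty: Record<string, number>; }
interface LoggedOrder { seq: number; orderId: string; qty: number; }

function recover(snapshot: Snapshot, log: LoggedOrder[]): Snapshot {
  const state: Snapshot = {
    lastSeq: snapshot.lastSeq,
    openQty: { ...snapshot.openQty },
  };
  for (const o of [...log].sort((a, b) => a.seq - b.seq)) {
    if (o.seq <= state.lastSeq) continue;   // already reflected in snapshot
    state.openQty[o.orderId] = (state.openQty[o.orderId] ?? 0) + o.qty;
    state.lastSeq = o.seq;
  }
  return state;
}
```

Because replay is idempotent, a replacement instance can safely re-read an overlapping window of the orders log without double-applying anything.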

Settlement is the operation that requires the most engineering rigor. When a trade matches, two user wallets must be updated atomically — the buyer's available balance decreases by the trade cost, the seller's balance increases by the trade proceeds, and a position record is created or updated for each user. Any failure mid-operation could leave wallets in an inconsistent state, which for a real-money platform is unacceptable. We use MongoDB multi-document transactions for the settlement operation, ensuring all-or-nothing semantics. Every settlement also writes an entry to an immutable audit log collection that records the trade, the affected wallets, and the resulting balances. The reconciliation job that runs every fifteen minutes verifies that the sum of all wallet balances equals the sum of all deposits minus withdrawals, with any discrepancy alerting the on-call engineer immediately.
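The invariant that the reconciliation job enforces is simple enough to state in code. Field names here are illustrative; the point is that trades only move money between wallets, so they cancel out, and total balances must always equal deposits minus withdrawals.

```typescript
// Sketch of the fifteen-minute reconciliation check. Amounts are in integer
// minor units (paise/cents) — integer arithmetic avoids the floating-point
// drift that would make a strict equality check meaningless. Names are
// illustrative, not the production schema.

interface Ledger {
  balances: number[];      // current wallet balances, in minor units
  deposits: number;        // lifetime total credited via payment gateways
  withdrawals: number;     // lifetime total paid out
}

function reconcile(ledger: Ledger): { ok: boolean; drift: number } {
  const total = ledger.balances.reduce((a, b) => a + b, 0);
  const expected = ledger.deposits - ledger.withdrawals;
  const drift = total - expected;
  return { ok: drift === 0, drift };      // non-zero drift pages on-call
}
```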

Market resolution is the other end of the opinion trading lifecycle. When the underlying event resolves — for example, when the match ends and the "Will India win?" market has a definitive answer — the resolution service marks the market as resolved, closes the order book to new orders, and triggers payout calculations for every open position. Yes-position holders receive one hundred per contract if the answer is Yes; No-position holders receive one hundred per contract if the answer is No. Payouts are queued through Kafka to the wallet service for atomic crediting, with the same audit-log discipline applied to every payout transaction.

Fraud detection runs as a separate service that consumes the trade event stream from Kafka and applies a rule-based and machine-learning hybrid model to flag suspicious patterns. Detected anomalies — for example, the same user trading from multiple accounts in patterns suggesting wash trading, or trades placed within milliseconds of a score update suggesting potential information asymmetry — are surfaced to the operations team through a moderation queue in the admin dashboard. We deliberately keep fraud detection in a separate service rather than embedding it in the matching engine because fraud rules evolve continuously and we do not want every rule update to require a deployment of the most performance-sensitive service in the platform.

Deep Dive: The Live Score Pipeline

The live score pipeline is the most heavily exercised path in the entire platform. Every ball bowled in every active match generates a score update that flows through this pipeline, fanning out to every user watching that match across mobile and web clients. The end-to-end latency budget for this path is one second, and we meet it consistently.

The pipeline begins at the licensed cricket data feed, which delivers ball-by-ball events through a webhook integration. The webhook payload includes the match identifier, the over and ball number, the bowler and batsman, the runs scored, any extras, and metadata about the ball outcome. Our match service exposes the webhook endpoint behind an API gateway with strict authentication using HMAC-signed payloads, ensuring that only the legitimate data feed can publish events to our system.
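The HMAC check on the webhook endpoint can be sketched with Node's built-in crypto module. The signature-header format and hex encoding are assumptions about the feed's contract; the essential parts are recomputing the digest over the raw body and comparing with `timingSafeEqual` so the check is constant-time.

```typescript
// Sketch of webhook signature verification. The signature is assumed to be a
// hex-encoded HMAC-SHA256 over the raw request body — the real feed's header
// name and encoding may differ. timingSafeEqual prevents timing attacks on
// the comparison; it throws on length mismatch, so we reject that case first.

import { createHmac, timingSafeEqual } from "node:crypto";

function verifyWebhook(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const given = Buffer.from(signatureHex, "hex");
  if (given.length !== expected.length) return false;
  return timingSafeEqual(given, expected);
}
```

Verifying against the raw body (before JSON parsing) matters: re-serializing parsed JSON can reorder keys and silently break the signature.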

Upon receiving a webhook, the match service performs several operations within a tight time budget. It validates the payload against a JSON schema, deduplicating against any retry-induced duplicate deliveries. It persists the new ball to the match document in MongoDB. It computes derived state — the updated score, the run rate, the current partnership, the projected total based on current run rate — and persists the derived state alongside the ball. It then publishes a structured event to the Kafka topic match.score.updated, with the match identifier as the partition key to ensure all events for a single match are processed in order by downstream consumers.
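The derived-state step can be sketched as a pure function over the balls bowled so far. The ball shape and the projection formula (current run rate extrapolated across the full innings) are the simple versions described above; among other simplifications, the sketch treats every delivery as a legal ball, where the real service excludes wides and no-balls from the over count.

```typescript
// Sketch of derived-state computation: score, run rate, and a naive
// projected total from the balls so far. Field names are illustrative.

interface Ball { runs: number; extras: number; }

function deriveState(balls: Ball[], totalOvers = 20) {
  const score = balls.reduce((s, b) => s + b.runs + b.extras, 0);
  const legalBalls = balls.length;          // simplification: every ball counted as legal
  const oversBowled = legalBalls / 6;
  const runRate = oversBowled > 0 ? score / oversBowled : 0;
  const projectedTotal = Math.round(runRate * totalOvers);
  return { score, oversBowled, runRate, projectedTotal };
}
```

Keeping this pure makes the tight time budget easy to meet: the function can be profiled and unit-tested in isolation from the persistence and publishing steps around it.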

The Kafka topic has multiple consumers. The WebSocket gateway consumer receives events and pushes them to all clients subscribed to that match's room, formatting the event as a small JSON payload optimized for client-side consumption. The trading service consumer receives the same event and triggers price recalculations for all opinion markets dependent on that match's state. The notification service consumer receives the event and applies notification rules — a wicket falling triggers a different notification than a four being scored, and the rules account for user preferences, time of day, and overall notification frequency for each user. The analytics service consumer receives the event for funnel and engagement analysis. The cache invalidation consumer receives the event and invalidates relevant Redis cache entries so that subsequent API queries see fresh data.

Each consumer operates independently with its own scaling characteristics. The WebSocket gateway is the most performance-sensitive consumer because it has direct user-facing latency implications. We optimize it aggressively: events are batched within a 50-millisecond window for clients subscribed to the same room, JSON serialization is done once per event rather than per recipient, and the WebSocket frames use binary encoding via MessagePack for the most volume-critical update types. These optimizations let a single WebSocket gateway instance fan out a single event to tens of thousands of connected clients in well under one hundred milliseconds.

The trading service consumer is the most stateful because it must atomically update market prices across potentially hundreds of opinion markets per match. We use the same in-memory state machine pattern as the matching engine — each market's price calculation logic runs in the working memory of a single instance, with consistent partitioning ensuring that all events for a market are processed by the same instance.

The notification service consumer applies the most business logic. It evaluates each event against a tree of notification rules — is this event significant enough to notify? does this user have notifications enabled for this match? has this user been notified too frequently in the last hour? is this user currently active in the app, in which case we should suppress the push and rely on in-app banners instead? — and dispatches notifications through Firebase Cloud Messaging with payload deduplication, retry logic, and delivery tracking.

Deep Dive: Caching Strategy

Caching is everywhere in Cricket Winner, applied at multiple layers with different time-to-live policies tuned for the data type. Without aggressive caching, our database tier would be the bottleneck during peak load. With it, the platform handles peak load gracefully while keeping infrastructure costs manageable.

At the edge layer, CloudFront caches static assets — JavaScript bundles, CSS files, images, fonts — with long time-to-live values measured in months. We use cache busting through filename hashing so that new deployments invalidate caches automatically. CloudFront also caches API responses for read-heavy endpoints with short time-to-live values, typically five to thirty seconds, with the Cache-Control header on the API response controlling cacheability per endpoint.

At the application layer, we use Redis as a hot cache for frequently accessed read paths. User profiles, match metadata, news article rendering, opinion market prices — all are cached in Redis with appropriate invalidation strategies. Match data uses a write-through caching pattern: every update to a match document also updates the corresponding Redis entry. News articles use a time-based caching strategy: articles are cached for one minute, with cache invalidation triggered explicitly when the editorial team publishes updates.
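The write-through pattern used for match data is easy to mis-implement (cache first, store second, leaving a stale entry if the durable write fails), so it is worth sketching. Here a `Map` stands in for both Redis and MongoDB; the class and method names are illustrative.

```typescript
// Sketch of write-through caching: every write hits the durable store first,
// then refreshes the hot copy, so reads never observe a cache entry newer
// than the store. Reads fall through to the store on a miss and repopulate.
// The Maps stand in for Redis (cache) and MongoDB (store).

class WriteThroughCache<V> {
  private cache = new Map<string, V>();
  constructor(private store: Map<string, V>) {}

  write(key: string, value: V): void {
    this.store.set(key, value);   // durable write first (MongoDB in production)
    this.cache.set(key, value);   // then refresh the hot copy (Redis)
  }

  read(key: string): V | undefined {
    if (this.cache.has(key)) return this.cache.get(key);   // cache hit
    const v = this.store.get(key);                         // miss: fall through
    if (v !== undefined) this.cache.set(key, v);           // repopulate
    return v;
  }
}
```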

At the database layer, MongoDB has its own internal caching of hot working sets, and we size our cluster to keep the active working set in memory. Indexes are tuned for the most common query patterns. We use the explain plan analysis routinely to identify slow queries and either optimize them through index changes or rewrite them to use more efficient query patterns.

A particular caching pattern worth highlighting is the leaderboard cache. Cricket Winner has multiple leaderboards — top traders by profit and loss, most active users, top contributors to community discussion — all of which are computationally expensive to recalculate from scratch. We use Redis sorted sets to maintain leaderboards in real time, updating them incrementally as the underlying events occur. Reading a leaderboard top-100 from Redis takes well under one millisecond, regardless of the total user count.
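The sorted-set pattern maps directly onto two Redis commands: ZINCRBY for the incremental update and ZREVRANGE for the top-N read. The in-memory model below shows the same semantics with a `Map` standing in for Redis; it is a sketch of the access pattern, not of Redis's skip-list internals, and the names are illustrative.

```typescript
// In-memory model of the leaderboard: increment(user, delta) mirrors Redis
// ZINCRBY, top(n) mirrors ZREVRANGE 0 n-1 WITHSCORES. Redis keeps members
// sorted as they are written; this sketch sorts on read for clarity.

class Leaderboard {
  private scores = new Map<string, number>();

  increment(userId: string, delta: number): void {
    this.scores.set(userId, (this.scores.get(userId) ?? 0) + delta);
  }

  top(n: number): Array<[string, number]> {
    return [...this.scores.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, n);
  }
}
```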

Deep Dive: Database Schema Design

The MongoDB schema for Cricket Winner is the result of many design iterations and ongoing refinement. The major collections include users, matches, news_articles, opinion_markets, orders, trades, positions, wallets, transactions, notifications, and audit_logs. Each collection has been designed for the dominant query patterns it must serve.

The users collection stores user profiles, preferences, KYC status, and authentication metadata. Embedded documents store frequently-accessed associated data like notification preferences and trading limits. Less frequently accessed data like address history or document metadata is stored in dedicated subcollections with references back to the user document.

The matches collection is the heart of the cricket data model. Each match document stores match metadata, team rosters, current state, and a recent-balls subdocument containing the most recent ten overs of ball-by-ball data for fast access. Older balls are archived to a separate match_balls collection to keep individual match documents small enough for efficient retrieval. Indexes on match identifier, status, and start time support the most common queries.

The opinion_markets collection stores the current state of every active and resolved market. Each document embeds the order book summary, the current bid and ask, recent trade prices, and resolution metadata. The full order book itself lives in the working memory of the matching engine, with the database providing durable persistence and recovery support rather than serving real-time read queries.

The orders and trades collections are append-only, following an immutable event-sourcing pattern. Every order ever submitted is in the orders collection, indexed by user, market, and timestamp. Every trade ever executed is in the trades collection. This immutability simplifies audit and debugging, at the cost of larger collection sizes that we manage through periodic archiving to S3 with Athena providing query access for archived data.

The wallets collection stores current balance information for every user, with a separate transactions collection providing the immutable ledger of every credit and debit. Wallet balance updates always occur within a multi-document transaction that includes the corresponding transaction record, ensuring that the ledger and the balance are always consistent.

Deep Dive: Security Architecture

Security in Cricket Winner is implemented in defense-in-depth fashion, with multiple independent layers each providing protection. The premise is that any single layer might fail, but the probability of multiple layers failing simultaneously is negligibly small.

At the network layer, all traffic between clients and our infrastructure is encrypted using TLS 1.3 with strong cipher suites. Internal service-to-service traffic within our VPC also uses TLS for sensitive paths, particularly anything touching wallet or trading state. AWS security groups restrict network access between services to only the ports and protocols required, with no broad allow-all rules anywhere in the production environment.

At the authentication layer, we use short-lived JWT access tokens combined with longer-lived refresh tokens. Access tokens expire in fifteen minutes, limiting the window of exposure if a token is somehow compromised. Refresh tokens are stored securely on the client — in the iOS Keychain, the Android Keystore, and HttpOnly cookies on the web. Refresh tokens are rotated on every use, and we maintain a server-side blocklist of revoked tokens to handle immediate session termination scenarios.

At the authorization layer, every API endpoint declares its required permissions explicitly, and the API gateway enforces those permissions against the token presented with each request. Role-based access control covers the major user types — regular users, premium users, KYC-verified users, support staff, content editors, system administrators — and fine-grained permissions cover specific operational capabilities.

At the input validation layer, every API endpoint validates its request payload against a strict schema before any business logic runs. We use JSON Schema for validation, with schemas auto-generated from our OpenAPI specifications. Any payload that fails validation is rejected with a clear error message, never reaching the business logic layer where invalid data could cause unexpected behavior.
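The fail-fast shape of this layer can be shown with a drastically simplified validator. In production this is a full JSON Schema validator generated from the OpenAPI specs; the sketch below only checks required fields and primitive types, and every name in it is illustrative.

```typescript
// Simplified stand-in for the schema-validation layer: reject any payload
// with missing required fields or wrong primitive types before business
// logic runs. An empty error list means the payload passes.

type FieldType = "string" | "number" | "boolean";

function validate(
  payload: Record<string, unknown>,
  required: Record<string, FieldType>,
): string[] {
  const errors: string[] = [];
  for (const [field, type] of Object.entries(required)) {
    if (!(field in payload)) errors.push(`missing field: ${field}`);
    else if (typeof payload[field] !== type) errors.push(`wrong type: ${field}`);
  }
  return errors;
}
```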

At the rate limiting layer, every API endpoint has a configured rate limit appropriate to its sensitivity. Authentication endpoints have aggressive rate limits to prevent credential-stuffing attacks. Trading endpoints have rate limits per user to prevent automated trading abuse. Read endpoints have generous rate limits but are still protected against DDoS amplification.
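A per-user fixed-window limiter of the kind applied to the trading endpoints can be sketched in a few lines. In production the counters live in Redis (INCR plus EXPIRE) so all gateway instances share state; here a `Map` and an injected clock stand in for Redis, and the limits are illustrative.

```typescript
// Fixed-window rate limiter: each user gets `limit` requests per `windowMs`.
// The clock is injectable so the window logic is deterministic under test.
// In production the per-user counter would be a Redis INCR with an EXPIRE.

class RateLimiter {
  private windows = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number,
    private windowMs: number,
    private now: () => number = Date.now,
  ) {}

  allow(userId: string): boolean {
    const t = this.now();
    const w = this.windows.get(userId);
    if (!w || t - w.windowStart >= this.windowMs) {
      this.windows.set(userId, { windowStart: t, count: 1 });  // new window
      return true;
    }
    if (w.count < this.limit) {
      w.count++;
      return true;
    }
    return false;  // over limit for this window
  }
}
```

Fixed windows admit a brief burst at window boundaries; a sliding-window or token-bucket variant smooths that out at the cost of slightly more state per user.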

At the data protection layer, sensitive data is encrypted at rest in addition to the in-transit encryption already mentioned. Database encryption uses AWS-managed encryption keys for most data, with customer-managed keys for the most sensitive PII and financial data. We do not store payment card details ourselves — that is delegated to PCI-compliant payment processors — and we minimize the PII we store overall, applying the principle that the safest data is the data we never collect.

At the logging and monitoring layer, security-relevant events generate audit log entries that are stored immutably and shipped to a separate AWS account with restricted access. Failed authentication attempts, permission denials, configuration changes, and admin actions are all logged. Anomaly detection rules surface patterns suggestive of attacks — bursts of failed logins from a single IP, sudden spikes in administrative actions, unusual data export patterns.

We also operate an active security testing program. Automated dependency scanning catches known vulnerabilities in our software dependencies, with a service-level objective to patch critical vulnerabilities within twenty-four hours of disclosure. Static application security testing runs on every pull request, catching common vulnerability classes before code reaches production. Dynamic application security testing runs against the staging environment on a recurring schedule, probing the running system the way an external attacker would.


Day 49

Development Journey

Building Cricket Winner over five months was an intense, joyful, occasionally painful journey. Looking back across the ten sprints that comprised the project, certain moments stand out as turning points or memorable challenges.

Sprint one was foundation week. We set up the monorepo structure, configured the CI/CD pipelines, established our service-skeleton template, deployed the first "hello world" version of every microservice to a staging environment, and ran the first end-to-end smoke test. By the end of week two, a developer could open a pull request and have it automatically tested, reviewed, deployed to staging, and made available for the QA team to validate — all within twenty minutes. This investment in infrastructure paid dividends every subsequent week.

Sprints two and three focused on authentication and user management. We implemented JWT-based authentication, social login integration with Google and Apple, OTP-based phone authentication using a third-party SMS provider, KYC integration for the trading platform, and the basic profile management screens. The most interesting challenge in this phase was designing the session model in a way that supported simultaneous login on multiple devices while still letting users see and revoke active sessions from a security settings screen.
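
The multi-device session model can be sketched as a per-user session registry that powers both the "active sessions" screen and revocation. This is a hypothetical in-memory sketch; production would back the store with a database and tie `isValid` into token refresh.

```typescript
interface DeviceSession {
  sessionId: string;
  userId: string;
  device: string;
  createdAtMs: number;
  revoked: boolean;
}

class SessionStore {
  private sessions = new Map<string, DeviceSession>();
  private counter = 0;

  create(userId: string, device: string, now: number): DeviceSession {
    const session: DeviceSession = {
      sessionId: `sess-${++this.counter}`,
      userId,
      device,
      createdAtMs: now,
      revoked: false,
    };
    this.sessions.set(session.sessionId, session);
    return session;
  }

  // Powers the security-settings screen listing active devices.
  listActive(userId: string): DeviceSession[] {
    return [...this.sessions.values()].filter(
      (s) => s.userId === userId && !s.revoked,
    );
  }

  // A revoked session fails validation on its next token refresh.
  revoke(sessionId: string): void {
    const s = this.sessions.get(sessionId);
    if (s) s.revoked = true;
  }

  isValid(sessionId: string): boolean {
    const s = this.sessions.get(sessionId);
    return s !== undefined && !s.revoked;
  }
}

const sessionStore = new SessionStore();
const phone = sessionStore.create("user-1", "Pixel 7", Date.now());
const laptop = sessionStore.create("user-1", "MacBook", Date.now());
sessionStore.revoke(phone.sessionId); // user kicks out the phone session
```
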

Sprints four and five built the match engine. This included the integration with the licensed data feed, the match data model in MongoDB, the score-publishing Kafka pipeline, and the WebSocket subscription mechanism. The biggest hurdle in this phase was achieving the sub-second latency target for score updates. Initial implementations were landing at around three seconds end-to-end, which was unacceptable. Through aggressive profiling, we identified bottlenecks at every stage — webhook validation overhead, MongoDB write latency, Kafka producer batching delays, WebSocket event-loop scheduling, client-side render delays. We optimized each one, and by the end of sprint five we were consistently delivering scores in under 800 milliseconds in production-equivalent conditions.

Sprint six was opinion trading core. This was the most architecturally complex sprint of the project. We built the matching engine, the order book data structure, the wallet service, the trade settlement pipeline, and the trading UI in Flutter. The most subtle challenge was designing the order book to handle concurrent operations correctly without resorting to coarse-grained locks that would have killed throughput. We used optimistic concurrency control with version numbers on order book documents, retrying operations on conflict, and we benchmarked the system to handle thousands of trades per second per market with sub-100-millisecond latency.
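
The optimistic-concurrency pattern described above (version numbers plus retry on conflict) looks roughly like this. The in-memory `Map` stands in for the MongoDB collection, where the equivalent of `compareAndSwap` would be a `findOneAndUpdate` filtered on the expected version; the names here are illustrative.

```typescript
interface OrderBookDoc {
  marketId: string;
  version: number;
  bids: number[]; // simplified: just a list of bid prices
}

const books = new Map<string, OrderBookDoc>();
books.set("mkt-1", { marketId: "mkt-1", version: 1, bids: [] });

// Succeeds only if no other writer bumped the version in the meantime.
function compareAndSwap(doc: OrderBookDoc, expectedVersion: number): boolean {
  const current = books.get(doc.marketId);
  if (!current || current.version !== expectedVersion) return false;
  books.set(doc.marketId, { ...doc, version: expectedVersion + 1 });
  return true;
}

function updateWithRetry(
  marketId: string,
  mutate: (d: OrderBookDoc) => OrderBookDoc,
  maxRetries = 5,
): boolean {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const current = books.get(marketId);
    if (!current) return false;
    const mutated = mutate({ ...current, bids: [...current.bids] });
    if (compareAndSwap(mutated, current.version)) return true;
    // Conflict: another writer won the race; re-read and retry.
  }
  return false;
}

const ok = updateWithRetry("mkt-1", (d) => {
  d.bids.push(450);
  return d;
});
```

The design choice worth noting is that contention is resolved by retrying a cheap read-modify-write rather than by holding a lock across the operation, which is what keeps throughput high under concurrent load.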

Sprint seven was the news platform. We built the editorial CMS, the news ingestion pipeline, the article rendering on both mobile and web, the personalization engine that surfaces relevant articles to each user, and the SEO infrastructure for the web articles. The CMS specifically went through three design iterations because the editorial team's needs were more nuanced than initially scoped — they needed scheduled publishing, draft revisions, multi-author collaboration, embedded media, taxonomy management, and a preview environment that exactly matched production rendering.

Sprint eight tackled notifications and engagement loops. We built the push notification service, the segmentation engine that determines who receives which notification, the delivery and tracking pipeline, and the user preference center where individuals can fine-tune their notification settings. We were deliberately conservative with notification frequency, since sending five wicket alerts a day to a user who only wanted match-start notifications is the fastest way to lose that user, and we built rules-based throttling to prevent over-notification.
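
A minimal sketch of the rules-based throttling idea, under two assumed rules: a notification goes out only if the user opted into its category and is under a daily cap. The names and rule values are illustrative, not the production rule set.

```typescript
interface NotificationPrefs {
  enabledCategories: Set<string>;
  maxPerDay: number;
}

function shouldSend(
  prefs: NotificationPrefs,
  category: string,
  sentTodayCount: number,
): boolean {
  if (!prefs.enabledCategories.has(category)) return false; // user opted out
  if (sentTodayCount >= prefs.maxPerDay) return false; // daily cap reached
  return true;
}

// A user who only wants match-start and result alerts, capped at 3/day.
const prefs: NotificationPrefs = {
  enabledCategories: new Set(["match_start", "final_result"]),
  maxPerDay: 3,
};
```
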

Sprint nine was payments and withdrawals. We integrated with payment gateways for deposits, built the withdrawal flow with KYC verification gates, designed the transaction history UI, and implemented the daily reconciliation job that detects any discrepancies between our internal wallet ledger and the external payment processor. This sprint also included a thorough security review of the entire payment flow, with penetration testing performed by a third-party security firm.
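
The daily reconciliation job can be sketched as a two-sided diff of the internal ledger against processor records, keyed by transaction id. This is a simplified illustration with assumed names; amounts are integer paise, the standard way to avoid floating-point money bugs.

```typescript
interface LedgerEntry {
  txnId: string;
  amountPaise: number; // integer minor units, never floats
}

function reconcile(
  internal: LedgerEntry[],
  processor: LedgerEntry[],
): string[] {
  const discrepancies: string[] = [];
  const byId = new Map(processor.map((t) => [t.txnId, t]));
  for (const t of internal) {
    const ext = byId.get(t.txnId);
    if (!ext) {
      discrepancies.push(`missing at processor: ${t.txnId}`);
    } else if (ext.amountPaise !== t.amountPaise) {
      discrepancies.push(`amount mismatch: ${t.txnId}`);
    }
    byId.delete(t.txnId);
  }
  // Anything left was recorded by the processor but not by us.
  for (const txnId of byId.keys()) {
    discrepancies.push(`missing internally: ${txnId}`);
  }
  return discrepancies;
}

const issues = reconcile(
  [{ txnId: "t1", amountPaise: 50_000 }, { txnId: "t2", amountPaise: 10_000 }],
  [{ txnId: "t1", amountPaise: 50_000 }, { txnId: "t2", amountPaise: 10_500 }],
);
```

An empty result means the two ledgers agree; anything else pages the finance-engineering on-call.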

Sprint ten was the final stretch — admin dashboard, analytics integrations, performance optimization, and pre-launch hardening. We built the admin dashboard for the operations team to manage matches, configure opinion markets, moderate user content, view financial reports, and respond to customer support inquiries. We integrated product analytics, marketing analytics, and business intelligence tooling. We did a final round of performance optimization, achieving sub-2-second cold start times on the mobile app and sub-800-millisecond first contentful paint on the web platform. And we wrote the runbooks, the operational documentation, and the post-launch monitoring playbooks that would be needed to operate the platform in production.

Throughout all ten sprints, we held weekly demos with the founders, monthly business reviews with the broader stakeholder team, and ran two beta testing cycles with closed user groups before the public launch. By the time launch day arrived, the platform had been used by over four hundred beta users across more than a thousand cumulative sessions, with feedback systematically incorporated into the final product.


Day 56

Key Features Breakdown

Cricket Winner ships with a substantial feature surface area. Here we walk through the most important features and what makes each one distinctive.

Live Match Center is the heart of the app. Each active match has a dedicated live center that shows the current score, ball-by-ball commentary, partnership statistics, projected score, run rate graphs, wagon wheels, pitch maps, player batting and bowling cards, fall of wickets timeline, and a match information sidebar with venue, toss, weather, and umpire details. The entire screen updates in real time as the match progresses, with subtle animations on score changes that draw the eye without being distracting.

Ball-by-Ball Commentary is more than just text. Each ball entry includes the over and ball number, the bowler, the batsman, the runs scored, any extras, the type of dismissal if a wicket fell, and an editorial commentary line that gives context — for example, "Kohli reaches his fifty with a beautifully timed cover drive, his eighty-fifth ODI half-century." The commentary is generated by a hybrid system: structured ball data from the data feed combined with editorial content from Cricket Winner's in-house journalism team, who can override or enhance any ball's commentary in real time through the admin CMS.
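
The hybrid commentary system can be sketched as a merge: structured ball data generates an automatic line, and an editorial override from the CMS, when present, takes precedence. Field and function names here are illustrative assumptions.

```typescript
interface BallEvent {
  over: number;
  ballInOver: number;
  bowler: string;
  batsman: string;
  runs: number;
}

// Fallback line generated purely from the structured feed data.
function autoCommentary(ball: BallEvent): string {
  const unit = ball.runs === 1 ? "run" : "runs";
  return `${ball.bowler} to ${ball.batsman}, ${ball.runs} ${unit}`;
}

// Editorial text, when an editor has written one, wins over the auto line.
function renderBall(ball: BallEvent, editorialOverride?: string): string {
  const text = editorialOverride ?? autoCommentary(ball);
  return `${ball.over}.${ball.ballInOver}: ${text}`;
}

const plain = renderBall({
  over: 34, ballInOver: 2, bowler: "Starc", batsman: "Kohli", runs: 4,
});
const enriched = renderBall(
  { over: 34, ballInOver: 2, bowler: "Starc", batsman: "Kohli", runs: 4 },
  "Kohli reaches his fifty with a beautifully timed cover drive",
);
```
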

Opinion Markets are integrated directly into the match center. While watching a live match, users see a panel of active opinion markets — questions like "Will India win this match?" "Will the next over have a wicket?" "Will Kohli score a fifty?" — with current buy and sell prices for "Yes" and "No" positions. Users can place trades with two taps, see their position in the portfolio tab, and watch the price move in real time as the match progresses. When the market resolves, payouts are credited automatically to the user's wallet within seconds.
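
Settlement on market resolution can be sketched as crediting the winning side of a binary market. This assumes the common fixed-payout model where each winning share pays a fixed amount (here 1,000 paise); the payout value and names are assumptions for illustration, not the platform's confirmed economics.

```typescript
interface Position {
  userId: string;
  side: "yes" | "no";
  shares: number;
}

function settleMarket(
  positions: Position[],
  outcome: "yes" | "no",
  payoutPerSharePaise = 1_000, // assumed fixed payout per winning share
): Map<string, number> {
  const credits = new Map<string, number>();
  for (const p of positions) {
    if (p.side !== outcome) continue; // losing side receives nothing
    const prev = credits.get(p.userId) ?? 0;
    credits.set(p.userId, prev + p.shares * payoutPerSharePaise);
  }
  return credits;
}

// "Will Kohli score a fifty?" resolves yes: u1 held 5 yes shares.
const credits = settleMarket(
  [
    { userId: "u1", side: "yes", shares: 5 },
    { userId: "u2", side: "no", shares: 3 },
  ],
  "yes",
);
```

In production the resulting credits feed the wallet service as idempotent ledger entries, which is what lets payouts land within seconds of resolution.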

News Feed is a personalized timeline of cricket journalism. The default feed mixes breaking news, long-form analysis, player profiles, and historical content, ordered by recency and personalized using each user's reading history, favorite teams, and favorite players. Articles open in a clean reader view with no intrusive ads, support for embedded media, and a smooth reading experience that respects the user's attention.

Match Schedule and Fixtures lists upcoming matches across all major formats and tournaments, with reminder buttons that let users opt into notifications when a match starts. We also provide a results section for completed matches with full scorecards, post-match analysis, and editorial summaries.

Player Profiles are statistical deep dives on individual cricketers, with career statistics, recent form, head-to-head records against opposing teams, and a curated content section of news articles featuring that player. Users can favorite players to receive notifications about milestones, selections, injuries, and other newsworthy moments.

Team Pages mirror player profiles at the team level, with squad lists, recent results, upcoming fixtures, team-specific news, and tournament standings.

Tournament Hubs provide aggregated views of major tournaments — the Indian Premier League, ICC events, bilateral series — with bracket visualizations, group standings, top-performer leaderboards, and tournament-specific news.

User Portfolio is the centralized view of a user's trading activity. Open positions, recent trades, profit and loss across time periods, deposit and withdrawal history, and performance analytics that help users understand their own trading patterns.

Wallet is the financial hub. Users can deposit funds via UPI or net banking, view their available balance and locked balance, request withdrawals, and view a complete transaction history.

Notifications Center is the in-app inbox where users see a chronological list of all notifications they've received, organized by category, with mark-as-read and bulk-clear functionality.

Settings and Preferences lets users control their notification preferences with granular precision, manage their KYC status, change their password, link or unlink social accounts, and download or delete their account data in compliance with relevant data protection regulations.

Web-Specific Features include a public-facing news landing page optimized for search engine indexing, individual article URLs that are crawlable and shareable, and a public match center accessible without an account so casual fans can check scores quickly without onboarding friction.

Search and Discovery is implemented as a unified search experience across matches, players, teams, news articles, and opinion markets. As the user types, results are debounced and fetched from the backend with a typeahead pattern, with category-grouped results that let the user jump quickly to the type of content they want. The search backend is powered by a combination of MongoDB text indexes for primary content and a curated keyword-mapping layer for recognizing common cricket terminology even when spellings differ.
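
The debounce-and-group pattern described above is standard; a sketch of both halves follows. The function names are illustrative, and in the real client the debounced callback would issue the backend request.

```typescript
type Category = "match" | "player" | "team" | "article" | "market";

interface SearchResult {
  category: Category;
  title: string;
}

// Collapse a burst of keystrokes into a single backend call.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Group a flat result list for category-sectioned rendering.
function groupByCategory(
  results: SearchResult[],
): Map<Category, SearchResult[]> {
  const grouped = new Map<Category, SearchResult[]>();
  for (const r of results) {
    const bucket = grouped.get(r.category) ?? [];
    bucket.push(r);
    grouped.set(r.category, bucket);
  }
  return grouped;
}

const grouped = groupByCategory([
  { category: "player", title: "Virat Kohli" },
  { category: "article", title: "Kohli's best ODI knocks" },
  { category: "player", title: "Rohit Sharma" },
]);
```
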

Match Predictions Engine uses historical data and current form to provide predicted outcomes for upcoming matches — win probability for each team, expected scores, and key player matchups to watch. This is not a betting tool but an analytical companion, and it helps users orient themselves before a match begins. The predictions update as match conditions change — toss outcome, weather updates, last-minute lineup changes.

Wagon Wheels and Pitch Maps are interactive visualizations that show, for every batsman and bowler, where they have scored runs and where they have taken wickets. Users can filter by phase of innings, by bowler type, by ground location. These visualizations are popular with serious cricket fans and add depth that casual score apps do not provide.

Career Statistics Comparator lets users select up to four cricketers and see their career statistics side by side, with format-specific filters and charts that visualize the comparison. A frequent use case is comparing players from different eras, which the tool supports through normalized statistical metrics like strike rate adjusted for era and bowling economy adjusted for era.
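
One way such era normalization can work is to rescale a player's strike rate by the ratio of a fixed baseline to their era's average. This is a sketch of the idea only, under an assumed baseline of 75; it is not the comparator's actual formula.

```typescript
function eraAdjustedStrikeRate(
  playerStrikeRate: number,
  eraAverageStrikeRate: number,
  baselineStrikeRate = 75, // assumed cross-era reference point
): number {
  return (playerStrikeRate / eraAverageStrikeRate) * baselineStrikeRate;
}

// A 90 SR in a low-scoring era (era avg 70) ranks above a 95 SR in a
// high-scoring era (era avg 85) after adjustment.
const adjustedA = eraAdjustedStrikeRate(90, 70); // ~96.4
const adjustedB = eraAdjustedStrikeRate(95, 85); // ~83.8
```
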

Social Sharing Hooks are present throughout the product. Any score moment, any news article, any opinion market position can be shared to WhatsApp, Twitter, or Instagram with a pre-formatted card image generated server-side. The card image is designed to be visually striking and contains just enough information to entice click-throughs back to Cricket Winner. This is a meaningful organic acquisition channel.

Referral Program is a built-in viral loop where existing users can invite friends with a personalized referral link. New users who sign up via the link receive a welcome bonus on their first wallet deposit, and the referrer receives a referral bonus once the new user becomes active. The tracking is rigorous — referral attribution is preserved across device installs, browser sessions, and platform changes.

Customer Support Hub integrates a chatbot for common questions, a ticket-based support system for issues that need human attention, a self-service knowledge base for frequently asked questions, and a status page that shows real-time platform health. Support requests are routed automatically to the right team — KYC questions to the compliance team, payment issues to the finance team, technical issues to engineering.

Premium Membership Tier is supported in the architecture from day one even though it launched as a later feature. Premium members receive an ad-light experience, exclusive content from premium contributors, advanced statistics, priority support, and access to special opinion markets reserved for the premium tier. The architecture handles tier-based feature gating cleanly, with permission checks at the API level and conditional rendering at the UI level.

Day 63

Testing & QA

Quality assurance for Cricket Winner went through three distinct tiers, each with different goals and methodologies.

The first tier is automated testing, embedded in the development workflow. Every pull request must pass unit tests, integration tests, and a static analysis suite before it can be merged. Unit test coverage targets are eighty percent for backend business logic and seventy percent for mobile and web UI logic. Integration tests run against ephemeral test environments and validate end-to-end flows including authentication, score retrieval, trade placement, and wallet operations. Static analysis includes TypeScript strict-mode type checking, ESLint with our internal style guide, and security scanning via Snyk for dependency vulnerabilities.

The second tier is manual exploratory testing, conducted by our QA lead and her team. For every feature shipped, the QA team writes detailed test plans covering happy paths, edge cases, error states, and security scenarios. These plans are reviewed during sprint planning so that developers know exactly what their code will be tested against. Manual testing happens in three environments: a developer-controlled local environment, a shared QA environment that mirrors production, and a pre-production environment that runs the exact build that will be deployed to production. Bugs found during manual testing are filed in Jira with severity classifications, screenshots, and reproduction steps, and routed to the appropriate developer for resolution.

The third tier is beta testing with real users. We ran two beta cycles for Cricket Winner. The first beta involved approximately one hundred internal testers — friends and family of the Cricket Winner team — who used the app for two weeks during a live cricket series and provided structured feedback through an in-app feedback tool. The second beta opened to four hundred external users recruited through cricket fan communities, who used the app during another live series and provided feedback through both the in-app tool and a series of structured surveys. Both beta cycles produced rich qualitative feedback that directly shaped the V1 release.

Beyond functional testing, we ran several specialized testing campaigns. Performance testing included load tests using k6 that simulated up to fifty thousand concurrent users to validate that the platform could handle peak match-day traffic. Security testing included a third-party penetration test of the authentication, payment, and trading flows, which surfaced two medium-severity findings that we resolved before launch. Accessibility testing validated WCAG AA compliance across the web platform and mobile applications. Localization testing, while limited in V1 since the launch was English-only, validated that the architecture would support additional languages without code changes.

Test infrastructure is itself a product we maintained throughout the project. The CI/CD pipeline runs over fifteen hundred automated test cases on every pull request. Test reports are published to a dashboard accessible to all team members. Flaky tests are tracked, quarantined, and either fixed or removed within a sprint of being identified. We do not tolerate ignored or skipped tests because they erode confidence in the test suite over time.


Day 70

Deployment & DevOps

Cricket Winner runs in production on a foundation that our DevOps team designed for reliability, scalability, and operational simplicity. The deployment pipeline begins when a developer pushes code to a feature branch. Within seconds, the CI/CD pipeline triggers a build that runs all unit and integration tests, builds Docker images for each affected microservice, and stores those images in Amazon Elastic Container Registry tagged with the commit SHA.

When the pull request is merged to the main branch, the same build pipeline runs again, but this time it triggers a deployment to the staging environment automatically. The QA team validates changes in staging, and once they sign off, a deployment to production is initiated through a single-button promotion in our deployment dashboard. Production deployments use a blue-green strategy: the new version is deployed alongside the old, traffic is gradually shifted from old to new while health metrics are monitored, and if any metric crosses a threshold, traffic is automatically routed back to the old version while the team investigates.
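
The traffic-shifting decision during a blue-green deploy can be sketched as a step function gated on health metrics: advance the new (green) version while metrics stay inside thresholds, and route everything back to the old (blue) version the moment any metric crosses one. The step size and threshold values here are illustrative assumptions.

```typescript
interface HealthMetrics {
  errorRate: number; // fraction of requests failing
  p95LatencyMs: number;
}

interface Thresholds {
  maxErrorRate: number;
  maxP95LatencyMs: number;
}

function nextGreenTrafficPct(
  currentGreenPct: number,
  health: HealthMetrics,
  limits: Thresholds = { maxErrorRate: 0.01, maxP95LatencyMs: 500 },
  stepPct = 25,
): number {
  const unhealthy =
    health.errorRate > limits.maxErrorRate ||
    health.p95LatencyMs > limits.maxP95LatencyMs;
  if (unhealthy) return 0; // automatic rollback to the old version
  return Math.min(100, currentGreenPct + stepPct);
}

const healthyNext = nextGreenTrafficPct(25, { errorRate: 0.002, p95LatencyMs: 380 });
const rollback = nextGreenTrafficPct(75, { errorRate: 0.08, p95LatencyMs: 380 });
```

In practice this check runs on a timer between shift steps, with the metrics pulled from the monitoring stack rather than passed in directly.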

Infrastructure is defined as code using Terraform. Every AWS resource — every ECS cluster, every load balancer, every Kafka topic, every IAM role — is declared in version-controlled Terraform configuration. This means infrastructure changes go through the same pull request and review process as application code changes, providing an audit trail and preventing drift between environments.

Secrets management uses AWS Secrets Manager for production credentials and HashiCorp Vault for some highly sensitive values. No secrets are ever committed to the codebase. Local development environments use a developer-specific secrets file that is generated automatically on environment setup.

Monitoring and observability run on a combined Grafana, CloudWatch, and Sentry stack. Grafana dashboards visualize system health, business metrics, and user-facing latencies across every service. CloudWatch handles infrastructure metrics, log aggregation, and alerting. Sentry catches application-level errors with full stack traces, user context, and release tracking. Distributed tracing through AWS X-Ray lets us trace a single request across multiple microservices, identifying bottlenecks and failures with precision.

On-call rotation provides twenty-four-seven coverage during major cricket tournaments and business-hours coverage otherwise. PagerDuty manages the rotation, with primary, secondary, and escalation tiers. Alerts are tuned to minimize false positives: every alert is actionable, every alert has a runbook, and every post-incident review asks whether any alerts can be tuned more aggressively.

Backup and disaster recovery procedures are tested quarterly. MongoDB Atlas provides continuous backups with point-in-time recovery up to seventy-two hours. S3 buckets have versioning enabled and replicate to a secondary region. Critical data is also exported daily to a separate AWS account with restricted access, providing protection against credential compromise. Recovery time objectives are documented per data class, with the most critical data — wallet balances and active trades — having an RTO of under fifteen minutes.


Day 77

Results, Impact & Metrics

While we respect the confidentiality of our clients' specific business metrics, we can share the technical and operational results of the Cricket Winner platform.

On the performance front, the platform consistently delivers live score updates from data feed to user device in under 800 milliseconds, with a 95th percentile of approximately 1.1 seconds. API response times across the platform average 120 milliseconds at the 50th percentile and 380 milliseconds at the 95th percentile. The mobile app's cold start time is under 2 seconds on mid-range Android devices, and the web platform achieves a Largest Contentful Paint score in the green zone on Google's Core Web Vitals.

On the reliability front, the platform has maintained over 99.9% uptime since launch, with the only meaningful incidents related to upstream third-party data feed outages rather than failures in our own infrastructure. We have processed millions of trades and live score events without a single instance of data loss. The reconciliation job that compares internal wallet ledgers against external payment processor records has run thousands of times without finding any discrepancies — a testament to the rigor of the financial systems engineering.

On the scalability front, the platform has handled peak loads twenty times higher than baseline traffic during major matches, with auto-scaling policies expanding compute capacity automatically and contracting it as traffic subsides. We have not had to manually intervene in capacity management since launch.

On the engineering velocity front, the team continues to ship new features on a weekly cadence, with the strong foundation we laid in the first five months paying ongoing dividends. Adding new opinion market types, new tournament integrations, new content categories, and new user-facing features has been straightforward because the underlying architecture was designed for extension.

On the user experience front, qualitative feedback has consistently highlighted the platform's speed, polish, and the integrated experience of having scores, news, and trading in a single coherent app rather than juggling multiple apps. The Cricket Winner founders have shared in our regular reviews that user retention metrics are exceeding their initial projections.

Day 84

Lessons Learned

Every project teaches us something, and Cricket Winner taught us a lot. The most important lesson was the value of investing heavily in real-time infrastructure early. We were tempted in the first sprint to take shortcuts on the WebSocket gateway and the Kafka pipeline because they were not user-facing and the founders were eager to see polished mobile screens. Resisting that pressure and building the real-time backbone first meant that every subsequent feature integrated cleanly with it. If we had built the backbone last, we would have been refactoring for months.

The second lesson was the importance of building observability before features. There were temptations along the way to defer logging, metrics, and tracing investments because they did not visibly contribute to the demo. We resisted those temptations, and when production incidents inevitably occurred, we had the visibility to diagnose and resolve them in minutes rather than hours.

The third lesson was that financial systems require a different mindset than typical CRUD applications. The trading engine, the wallet, and the settlement pipeline all required rigor that exceeded what we typically apply to non-financial features. Idempotency, audit logging, reconciliation, and defense in depth were not optional. We have brought lessons from this project into our standard practices for any client work involving payments or financial operations.

The fourth lesson was the value of close client partnership. The Cricket Winner founders made themselves available for daily questions, weekly demos, and rapid decisions when the project required them. This partnership accelerated the project meaningfully — there were never multi-day delays waiting for clarifications, because clarifications happened in real time.

The fifth lesson was the importance of QA as a co-creator rather than a final gate. Embedding our QA lead in design and development from day one produced a higher-quality product than any retrospective testing approach could have achieved.

The sixth lesson was about the cost of rushing decisions on foundational technology choices. We spent more time than felt comfortable in week one debating MongoDB versus PostgreSQL, Kafka versus RabbitMQ, Flutter versus React Native. In retrospect, every minute of that debate was well spent. The cost of changing one of those decisions six months in would have been catastrophic. The cost of taking an extra week to make the right decision was negligible.

The seventh lesson was about communication discipline. The Cricket Winner project benefited enormously from a daily standup that was actually fifteen minutes, a weekly demo that was actually a demo and not a status update meeting, and a set of shared documents that were genuinely kept up to date rather than abandoned after the kickoff phase. These are not glamorous practices, but they are the practices that distinguish projects that ship from projects that drift.

The eighth lesson was about the value of documenting the reasoning behind decisions, not just the decisions themselves. Every significant architectural decision in Cricket Winner has a documented decision record: a short note explaining the context, the options considered, the decision made, and the reasoning. When new team members join, they can read these records and understand not just what the system is but why it is that way. When we revisit a decision later, we have a record of what we knew at the time, which often clarifies whether the original reasoning still holds.

The ninth lesson was about respecting platform conventions. Cricket Winner runs on iOS, Android, and the web, and each platform has its own conventions for navigation, gesture handling, typography, and behavior. Trying to build one identical experience across all platforms produces a product that feels alien everywhere. Embracing platform conventions where they matter — and only deviating where the deviation serves the user — produces a product that feels native everywhere.

The tenth lesson was about the long-term value of investing in tooling. Every hour we spent on developer tooling — better build pipelines, better local development environments, better debugging tools, better deployment workflows — saved many hours of developer time over the project lifetime. Tooling investments compound, and the earlier they happen, the more they compound.


Day 91

Conclusion & Future Roadmap

If you are reading this case study because you are evaluating tech partners for a similar project, this section is for you. We want to be candid about what makes Xenotix Labs the right choice for projects of this complexity, and equally candid about the situations where we may not be the right fit.

We are the right fit when you need a partner who treats engineering as a craft rather than a commodity. The decisions we made on Cricket Winner — choosing Kafka over RabbitMQ for high-throughput event streaming, choosing MongoDB over PostgreSQL for the document-shaped read-heavy workload, building a custom WebSocket gateway rather than using a managed service, implementing observability before features — were not the easy decisions. The easy decisions would have been to use whatever was familiar from the last project. The right decisions required research, debate, and conviction. We bring that mindset to every project.

We are the right fit when you need a partner who can hold strategic conversations alongside tactical execution. We do not just take orders. We push back on product decisions when we believe a better path exists. We bring perspective from twenty-plus prior projects across industries. We are happy to be told no, but we will tell you what we think, and that perspective often saves clients from expensive mistakes.

We are the right fit when you need a partner who delivers transparently. Our clients see our backlog, our timelines, our blockers, our progress, and our setbacks in real time. There is no smoke, no mirrors, no agency-style mystique. If something is going wrong, we say so. If something is taking longer than expected, we explain why. If we made a mistake, we own it and we fix it. This transparency is uncomfortable for some clients who prefer the false comfort of a curated weekly status report, but it is the only way we know how to operate.

We are the right fit when you need a partner who builds for the long term. Cricket Winner was not built to ship a demo. It was built to operate for years, to support a business model, to handle scale that the founders are still growing into. Our engineering choices reflect that long-term horizon. We do not take shortcuts that will hurt the founders six months from now. We do not optimize for our own convenience at the expense of the platform's maintainability. We treat every codebase as if we will be living in it for years, because in many cases we are.

We are not the right fit when you need a vendor who will execute a precisely specified scope without questioning it. We will question. We will suggest alternatives. We will sometimes refuse to build things we think are bad ideas. If you want a body shop that produces deliverables on a fixed-bid contract without engaging with the substance, we are not it.

We are not the right fit when you need the cheapest possible price. We are not expensive by global standards, but we are not the cheapest. We pay our team well because we hire well, and we charge accordingly. If price is the dominant decision factor, you will find cheaper options. We believe the total cost of ownership of a Xenotix-built platform is lower than cheaper alternatives because we ship higher-quality code that breaks less, scales better, and evolves more gracefully — but if upfront price dominates, we may not win on that metric.

We are not the right fit when you have an unrealistic timeline. Cricket Winner took five months because that is what it took to do it well. Could it have shipped in three months? Yes, with significant compromises in scope, quality, or both. We will tell you honestly what a realistic timeline looks like for what you want to build, and if you push for something significantly faster, we will push back. Sometimes clients prefer agencies that promise faster timelines, and we lose those engagements. We are okay with that.

If after reading this you think we sound like a fit, the next step is a conversation. We do a free one-hour discovery call where we listen to your vision, your constraints, your goals, and we tell you honestly whether we are the right partner. If we are, we put together a proposal grounded in concrete deliverables, transparent pricing, and a realistic timeline. If we are not, we recommend partners who might be a better fit. Either way, you walk away with a clearer picture of how to bring your project to life.


The Cricket Winner Roadmap: What's Next

Cricket Winner did not stop at V1. The platform continues to evolve, and the roadmap is rich with planned enhancements that the founders, their growing in-house team, and our continuing partnership are working through. While the specifics of unreleased features are confidential, we can share the broad themes.

The first theme is geographic expansion. Cricket Winner launched with a focus on the Indian market but has clear ambitions to serve cricket fans across the diaspora and in other cricket-playing nations. The localization infrastructure is in place; the next step is content and community development for each new market.

The second theme is content depth. The editorial team is expanding, with more journalists, more analysts, more video content, and more long-form features. The content management system is being extended with capabilities for richer multimedia editing, video transcription, and content collaboration.

The third theme is community. The platform is moving beyond solo consumption into community engagement — comments, discussions, fan communities organized around teams or players, collaborative predictions, and social features that let fans engage with each other rather than only with content.

The fourth theme is AI augmentation. Generative AI is finding its way into Cricket Winner in carefully chosen places — match summaries generated from structured data and reviewed by human editors, personalized content recommendations driven by user behavior, conversational interfaces for searching cricket history. AI is a tool, not a replacement for the editorial voice that gives Cricket Winner its character.

The fifth theme is platform extensions. There is interest in a Cricket Winner widget API that lets third-party publishers embed live scores and opinion markets on their own websites. There is interest in a public data API for academic researchers and cricket analysts. There is interest in a developer platform that lets independent builders create complementary experiences on top of Cricket Winner's data.

The sixth theme is responsible expansion of the trading product. The opinion trading platform has shown strong product-market fit, and the natural next steps include more market types, deeper integration with live match moments, and expanded fairness and integrity infrastructure.

We are excited to be Cricket Winner's continuing technology partner through these next chapters.



how it’s wired up.

The technologies we chose and how they fit together to build Cricket Winner.

Flutter · Next.js · UI/UX · Cricket industry

the artifacts.

figma wireframes, mockups, live screenshots, the whole journey →

Specific Blog Page website design - Xenotix labs
Articles website design - Xenotix labs
ICC MEN T20 - Xenotix labs
IPL2026 website design - Xenotix labs
Live Score website design - Xenotix labs
Player page website design - Xenotix labs
Blogs page website design - Xenotix labs
Blogs page website design - Xenotix labs
Blogs page website design mode - Xenotix labs
Blogs page website design dark theme - Xenotix labs
Search page website design - Xenotix labs
Player Page - Dark mode website design - Xenotix labs

the receipts.

14

build phases documented

4

technologies orchestrated

195

weeks from kickoff to launch

12

design artifacts produced


The Xenotix team didn't just build an app — they engineered a real-time cricket experience. Our score updates land in under 800ms and the opinion trading engine handles match-day spikes without breaking a sweat. They genuinely understood what cricket fans want. Rating: ⭐⭐⭐⭐⭐ (5/5)

Founder of Cricket Winner

questions we hear a lot.

How long did it take to build Cricket Winner, and how big was the team?
We built the V1 production-ready platform in approximately five months, including design, development, testing, beta cycles, and launch preparation. The team consisted of nine people across design, engineering, DevOps, QA, and project management.

How does the platform survive match-day traffic spikes?
Through a combination of horizontal autoscaling on ECS Fargate, predictive scaling that pre-warms capacity before scheduled match times, aggressive caching at the edge and application layers, queue-based load shedding through Kafka, and database connection pooling tuned for spike loads. We have tested the platform at twenty times baseline load with no failures.

Can you handle real-money features like KYC and payments?
Yes, we have experience integrating KYC providers and payment gateways into platforms that handle real-money transactions. We follow defense-in-depth security practices, comply with applicable regulations, and work closely with our clients' legal counsel to ensure full regulatory alignment.

How do we get started?
Reach out through our contact page to schedule a free one-hour discovery conversation. We will spend that hour understanding your vision, your constraints, and your goals, and we will tell you honestly whether we are the right partner. If we are, we put together a proposal. If we are not, we recommend partners who might be a better fit.
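The predictive pre-warming mentioned in the scaling answer can be sketched as a small scheduler: given upcoming match start times, it computes when to scale out and to how many tasks. The function name, the 30-minute lead time, and the 20x multiplier are illustrative assumptions, not the production configuration; in production, the resulting actions would drive the ECS service's desired count via scheduled scaling.

```python
from datetime import datetime, timedelta

def prewarm_schedule(matches, baseline_tasks=10, spike_multiplier=20,
                     lead_minutes=30):
    """Compute scale-out actions ahead of scheduled match times.

    For each match start time, emit a (scale_at, desired_tasks) pair that
    raises capacity `lead_minutes` before the first ball. All numbers here
    are illustrative, not a real production configuration.
    """
    actions = []
    for start in matches:
        scale_at = start - timedelta(minutes=lead_minutes)
        actions.append((scale_at, baseline_tasks * spike_multiplier))
    return actions

matches = [datetime(2026, 3, 14, 19, 30)]  # hypothetical fixture
for when, tasks in prewarm_schedule(matches):
    print(f"scale to {tasks} tasks at {when:%Y-%m-%d %H:%M}")
# → scale to 200 tasks at 2026-03-14 19:00
```

Scaling back down after the match would use the same mechanism in reverse, typically with a longer tail so capacity outlasts post-match traffic.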

got something
to build?

WhatsApp us