HYPV
A pre-launch prototype of a predictive-AI-powered hyperpersonalisation platform for
enterprise online retailers — request demo
Materially different from existing solutions (Segmentify, Algolia, Optimizely etc.) in that
it can run as a native online store frontend itself — ingesting maximum signal telemetry and
able to algorithmically vary absolutely any visual or data aspect of the content rendered —
rather than just bolting some product discovery onto an existing platform UX and running a
few A/B tests
This step-change in targeting capability is achieved by maximising the label density assigned to
every data or visual object and tracking how users interact with those labels — adapting the
content rendered based on the weight of the labels and velocity of the interactions over time
— and allowing multiple A/B test variants of any object to exist in parallel
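As a rough illustration, the weighted-label and interaction-velocity tracking described above might look like the sketch below. The class names, the 24-hour window and the weight values are assumptions for illustration, not HYPV's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Label:
    """A uniform label attached to any data or visual object."""
    name: str      # e.g. ":sneaker_high_top"
    weight: float  # how strong a selection signal this label is

class InteractionTracker:
    """Tracks interaction counts and velocity (interactions/hour) per label."""

    def __init__(self):
        self.events = {}  # label name -> list of event timestamps (epoch seconds)

    def record(self, label: Label, ts: float):
        self.events.setdefault(label.name, []).append(ts)

    def velocity(self, label_name: str, now: float, window_hours: float = 24.0) -> float:
        """Interactions per hour over a sliding window ending at `now`."""
        cutoff = now - window_hours * 3600
        recent = [t for t in self.events.get(label_name, []) if t >= cutoff]
        return len(recent) / window_hours
```

Velocity computed this way decays automatically as interactions stop, which is the property the realtime loop relies on later in this document.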
General or RAG-enhanced image classification services can be used to detect labels not already in
the product attribute set (such as :sneaker_high_top or
:skirt_long for clothing), further improving targeting efficiency
In addition to optimising traditional UX, early indications are that this native hypervariation
platform will be particularly effective when used in combination with emerging conversational AI
voice services to drive ecommerce interactions
- Combines product discovery and A/B testing features within a bare-metal platform
- Uses a novel labeling system where all components of a page render (data, visual fragments and
code/algos) share a uniform labeling format with weights and interaction directions/velocities
- Designed primarily for £200M+/yr stores with complex product ranges and high
repeat-visit traffic — unlikely to be useful outside that segment
- Pushes 'hypervariation ⟶ microsegmentation' content targeting concepts to
extremes — without using any third-party tech other than supporting integrations with
external AI/ML services
- Can run as an augmentation API to upgrade an existing platform, as a headless frontend over a
traditional platform (BigCommerce, Salesforce Commerce Cloud, Shopify etc.) or — eventually
— as a standalone platform itself
- Renders HTML natively as an MPA or integrates seamlessly with any popular frontend/SPA
framework such as React
- Proprietary multidimensional data structure that, theoretically, unlocks unparalleled
predictive modeling capabilities
- Enables unlimited object hypervariation — data objects, layout fragments, pages,
even code libraries/algos can have endless alternative variants in parallel existence that are
targetable to highly granular cohorts
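A minimal sketch of that variant-to-cohort targeting pattern, assuming each variant is keyed by a cohort label with the control object as fallback (function and variant names are hypothetical, not HYPV's API):

```python
def select_variant(variants, control, visitor_labels):
    """Return the first variant whose target cohort label matches the
    visitor's label set, otherwise the control object."""
    for target_label, variant in variants:
        if target_label in visitor_labels:
            return variant
    return control

# Example: two parallel PDP layout variants alongside the control
pdp_variants = [
    (":female_under_30", "pdp_layout_v2"),
    (":high_ltv", "pdp_layout_premium"),
]
```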
- Automatically optimises/A/B tests content based on realtime behaviour cues unless attenuated by
manual override rules — such as forcing :high_ltv visitors
(vs :visit_first_time) to see specific content, UX and
merchandising at every stage of the journey, or delivering a richer experience when
viewing sponsored brands listed on the site
- No limit on what can be MVT'd — a whole new category taxonomy can be tested against
the current one without breaking existing CMS config — an entirely new site layout/codebase
can be tested on 5% of traffic
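Splitting traffic like that is typically done with deterministic bucketing, so the same visitor always sees the same variant. A sketch under the assumption that visitors carry a stable id (names and the 5% share are illustrative):

```python
import hashlib

def in_test_bucket(visitor_id: str, test_name: str, share: float = 0.05) -> bool:
    """Hash visitor id + test name into 10,000 buckets; roughly `share`
    of visitors land below the threshold and see the test variant.
    Deterministic: the same visitor always gets the same answer."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < share * 10_000
```

Including the test name in the hash keeps bucket assignments independent across concurrent tests.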
- Targeting packs/multivariate tests can comprise data, visual and logic components together
— for instance: trialling different product imagery, descriptions, PDP layout and
recommend strip logic vs current config as a unified test package
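A unified test package of that kind could be expressed as a single config object; the field names below are assumptions to illustrate the shape, not HYPV's schema:

```python
# Hypothetical targeting pack bundling data, visual and logic variants
# into one unified test package, trialled against current config.
pack = {
    "id": "pdp_refresh_test",
    "traffic_share": 0.05,  # share of traffic seeing the pack vs control
    "components": {
        "data":   {"product_description": "desc_v2"},
        "visual": {"product_imagery": "imgset_b", "pdp_layout": "layout_v2"},
        "logic":  {"recommend_strip": "algo_v3"},
    },
}
```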
- Similarly, permanent content targeting can be created to show different product creative
and descriptions to :female_under_30 vs
:female_over_30, or to show dynamic model creative based on size
signals
- On the automated side — content entropy can be added by creating manual or synthetic
variants of any object (a product with 5 different main images, or a campaign strip with 20
available promotion ads that can only display 4) — the platform then MVTs which variants work
most effectively with which enduser biases — this pattern repeats everywhere
— trial 6 alternative versions of a landing page, A/B test 2 different 'most popular'
algo calcs on a category filter etc.
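One way an automated MVT could choose among entropy variants (e.g. filling 4 slots from 20 promotion ads) is a simple epsilon-greedy bandit per cohort. This is a sketch under that assumption, not HYPV's actual algorithm:

```python
import random

def pick_ads(ctr_by_ad, k=4, epsilon=0.1, rng=random):
    """Pick k ads: mostly the best performers by observed CTR, but with
    probability `epsilon` per slot pick a random ad to keep exploring."""
    ranked = sorted(ctr_by_ad, key=ctr_by_ad.get, reverse=True)
    picks = []
    pool = list(ranked)  # remaining candidates, best first
    while len(picks) < k and pool:
        if rng.random() < epsilon:
            choice = rng.choice(pool)   # explore
        else:
            choice = pool[0]            # exploit the current best
        pool.remove(choice)
        picks.append(choice)
    return picks
```

With `epsilon=0` this degenerates to a static top-k list; the exploration term is what lets underexposed variants accumulate signal.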
- All targeting logic runs at bare-metal level with supporting caches for maximum performance
(<300ms page render) — these targeting model caches are recalculated every
few hours — the caches themselves can also be A/B tested, making scenarios such as a
Pepsi-challenge between AWS and Azure AI frameworks behind a specific feature simple
to perform
- 100Ks of concurrent automated MVTs can exist in parallel, delivering vast variation in the
UX journeys available once sufficient object entropy and signal telemetry have accumulated
- Almost all visible product discovery (category pages, home page fragments etc.) goes through a
targeting layer and is pushed because of earlier signals from the enduser or the cohorts
they are classified as belonging to — on a specialist fishing tackle store,
a :sea angler would see minimal :fly
products after the first couple of clicks unless indications of wider interest are observed
- Continually learns from successful/unsuccessful recommendations as expected — but
can also run bespoke inference, such as assigning the synthetic bias label
:vegetarian to an enduser after a couple of sessions without buying or
interacting with a :meat labeled product
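That inference rule could be sketched as below; the two-session threshold and the session representation are assumptions for illustration:

```python
def infer_synthetic_labels(sessions, min_sessions=2):
    """Assign the synthetic :vegetarian label once a visitor has had at
    least `min_sessions` sessions with no interaction on any product
    carrying the :meat label. `sessions` is one set of interacted-with
    labels per session."""
    labels = set()
    if len(sessions) >= min_sessions and all(":meat" not in s for s in sessions):
        labels.add(":vegetarian")
    return labels
```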
- The targeting packs have unlimited configurability and refine in realtime — opening up the
potential to shift recommend biases based on whether it is sunny or raining
in London for endusers in that area, or to behave differently on a Monday vs a Friday (macro
scenarios that can be tested against control even though they have site-wide impact)
- Able to bias range targeting to lean towards a financial goal-seek objective —
such as prioritising clearing aged-stock over revenue in weeks before end of the half
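A minimal sketch of how such a goal-seek bias could blend behavioural relevance with an aged-stock objective; the blend weight and age normalisation are assumptions:

```python
def goal_seek_score(relevance: float, stock_age_days: int,
                    bias: float = 0.3, max_age: int = 365) -> float:
    """Blend a 0-1 behavioural relevance score with an aged-stock term.
    Raising `bias` shifts range ranking towards clearing old stock
    rather than maximising pure relevance/revenue."""
    age_term = min(stock_age_days / max_age, 1.0)
    return (1 - bias) * relevance + bias * age_term
```

Setting `bias` back to zero at the end of the half restores pure relevance ranking, which is what makes the objective switchable per trading period.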
- 4-dimensional data model that allows variant objects to seamlessly co-exist with the control
object and provides maximum signal resolution, enabling targeting of content down to 1-v-1
granularity — in addition to the normal measurement of interaction quantity,
the velocity of those interactions is tracked in realtime — as is whether any object
involved has been modified over time or is a test variant of the control object
- That high-resolution measurement granularity can detect anomalies such as a conv rate
drop after a product title change, or identify that a variant test object is underperforming
with :urban_male but overperforming with
:rural_female vs the control object
- Big focus on changes in signal velocity, especially negative — as soon as
:size_12 signals drop/stop (but :size_14
signals accelerate) — the realtime loop immediately starts filtering products available
in 12 but not 14 out of the cache sets — even though :size_12 was a primary targeting
label for the enduser in previous months
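A sketch of that negative-velocity loop: once the fading label's velocity collapses relative to the rising one, products carrying only the fading label are filtered out of the visitor's cache set. The drop ratio and data shapes are assumptions:

```python
def refresh_cache_set(products, fading, rising, velocities, drop_ratio=0.2):
    """products: list of (product_id, label_set) tuples. If the fading
    label's velocity has fallen below drop_ratio * the rising label's
    velocity, remove products that match `fading` but not `rising`."""
    if velocities.get(fading, 0.0) < drop_ratio * velocities.get(rising, 0.0):
        return [(pid, labels) for pid, labels in products
                if not (fading in labels and rising not in labels)]
    return products
```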
- The importance of velocity (trending) repeats everywhere, as it can be more interesting in
product discovery scenarios than simply using static 'top' based calcs — repeat visitors
already know the best sellers but may not be aware of a product emerging from 80th to 30th
place in its category in the last 7 days
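A sketch of such a rank-velocity calc: instead of a static 'top' list, surface the products whose category rank improved most over the window. Function and parameter names are illustrative:

```python
def trending(rank_7d_ago, rank_now, top_n=3):
    """Return product ids ordered by biggest rank improvement over the
    window; a move from 80th to 30th scores +50. Products without a
    rank in both snapshots are ignored."""
    deltas = {pid: rank_7d_ago[pid] - rank_now[pid]
              for pid in rank_now if pid in rank_7d_ago}
    return sorted(deltas, key=deltas.get, reverse=True)[:top_n]
```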
- In addition to velocity, the weight of bias labels is also a key factor in maximising the
data available to predictive models — an attribute such as :colour
is a significantly less heavy selection signal than :size
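Combining label weight with interaction velocity gives a single targeting signal; the weight values below are assumptions reflecting only that :size is a heavier selection signal than :colour:

```python
# Illustrative per-label weights; unknown labels get a neutral default.
LABEL_WEIGHTS = {":size": 1.0, ":colour": 0.3}

def signal_strength(label: str, velocity: float) -> float:
    """Weighted signal: at equal velocity, heavier labels dominate."""
    return LABEL_WEIGHTS.get(label, 0.5) * velocity
```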
HYPV can be run initially in augmentation mode (like Algolia etc.) where it only ingests
telemetry and feeds back some targeted content over API for insertion on an existing site
— it fails silently if there is any issue, so risk is low — at a minimum you likely get some
views on trading position and enduser behaviour not available in other analytics tools
Currently starting limited trials on live sites
contact@uncommerce.com
[Updated: 2024-12-12]