AI and You = You 2.0

“The real problem,” as the social scientist B.F. Skinner rightly pointed out, “is not whether machines think. It is whether men do.”

 

When it comes to technological advancements, it would seem Skinner was right on the money. We tend to take an alarmist view every time we are confronted with something new. Take the arrival of the textile loom in 12th-century Europe. Today we recognize it as the harbinger of the industrial revolution. Back then, however, it caused riots, and the Dutch even destroyed looms because they caused job losses among sheep farmers.

 

The problem is that while the science has advanced tremendously from the basic loom, we have not managed to advance the argument. As the COO of a company that is using AI to unlock hidden profits in the retail environment, I find much of my time is spent dealing with a variation of a nine-century-old question: how will technology (in this case AI) impact jobs? The answer, unfortunately, is also nine centuries old. Like every great advancement in technology, AI is massively disruptive. And like every great technology, AI will propel mankind to greater heights of achievement in all spheres of activity, be it science or art, physical or mental. There is not a single aspect of our lives that AI will not touch – if it is not already – and greatly enhance. Let me share a couple of examples.

 

A common story in the medical profession is that AI will cause massive job losses among doctors, as AI-driven apps like Babylon can blitz through databases of hundreds of thousands of patients to quickly diagnose symptoms. However, it took a combination of both human and artificial intelligence to work out the 3D structure of an HIV enzyme in just three weeks. Alone, neither could get the job done; together, we are greater than the sum of our parts. AI enhances humans as much as human intelligence enhances AI’s abilities.

 

Similarly, there is a fair bit of hullabaloo in the legal community about AI’s potential to replace lawyers, or even judges. While AI is threatening these kinds of white-collar jobs, what it will actually replace is the routine mental work, such as writing up contracts. AI is also an excellent assistant to the trial lawyer looking for the one loophole that will exonerate their client. Think back to that classic 1992 Hollywood courtroom comedy My Cousin Vinny. The fate of the case hinged not on Vinny’s knowledge of the law, but on his girlfriend’s expertise in automobiles. Now, you wouldn’t expect most lawyers, or their girlfriends, to know that much about cars. But given an AI assistant, Vinny would still have solved the case. All he would have to do is feed the critical photographs to his assistant and ask it the question he asked his girlfriend. Bingo: the exact same answer, without the attitude!

 

That’s the whole point with AI. It will help us, the human race, move on to the next level of achievement. Our personal virtual assistants will be constantly by our sides: monitoring us, advising, informing and helping us get better at whatever we do. We’re seeing the initial burst of products in the virtual personal AI space. From Apple’s Siri, Google’s Assistant and Microsoft’s Cortana to Amazon’s Alexa, plus a few interesting ones from smartphone manufacturers, you can get a teaser of what to expect when you collaborate with AI.

 

But these are early days. Moving forward, your personal assistant will become capable of a lot more, will understand you better, and will tailor itself to creating a better you. The soldier will have her assistant, the musician his. Eventually, she will become a better soldier and he a better musician. We are not there yet, but I believe Tony Stark and Jarvis are the vision of an AI-enabled future. There is no Iron Man without Jarvis, and no Jarvis without Tony Stark. It is an integrated future full of amazing possibilities. So let’s junk the old argument and embrace the brave new world of AI.

 

P.S. One of my business colleagues, a father of three, is already thinking about how AI can help his kids, and maybe yours, learn smarter.

Below is a summary of insights from the story published in New Scientist entitled “The road to artificial intelligence: A case of data over theory”.

Dartmouth College in Hanover, New Hampshire

  • A team gathered at Dartmouth in 1956 to create a new field called AI, spanning machine translation, computer vision, text understanding, speech recognition, control of robots and machine learning.
  • They took a top-down approach: a logic-driven approach where you first create a “mathematical model” of how we might process speech, text or images, and then implement that model in the form of a computer program, expecting that the work would further our understanding of our own human intelligence.
  • The Dartmouth team made two assumptions about AI:
    • Mathematical models and theories would simulate human intelligence
      AND
    • AI would help us understand our own intelligence
  • Both assumptions were WRONG.

Data beats theory!

  • By the mid-2000s, success came in the form of a small set of statistical learning algorithms plus large amounts of data. The field realised that the intelligence is more in the data than in the algorithm, and ditched the assumption that AI would help us understand our own intelligence.
  • A machine learns when it changes its behaviour based on experience, using data. Contrary to the assumptions of 60 years ago, we don’t need to precisely describe a feature of intelligence for a machine to simulate it.
  • Take email spam: every time you drag a message into the “spam” folder of your Gmail account, you are teaching the machine to “classify” spam. Likewise, every time you search for “bunny rabbit” in image search and click a result, you are teaching the machine what a bunny rabbit looks like. Data beats theory!
  • For the field of AI, it has been a humbling and important lesson: simple statistical tricks, combined with vast amounts of data, have delivered the kind of behaviour that had eluded its best theoreticians for decades.
  • Thanks to machine learning and the availability of vast data sets, AI has finally been able to produce usable vision, speech, translation and question-answering systems. Integrated into larger systems, those can power products and services ranging from Siri and Amazon to the Google car.
  • A key thing about data is that it’s found “in the wild”: generated as a byproduct of various activities, some as mundane as sharing a tweet or adding a smiley under a blog post.
  • Humans (Engineers and entrepreneurs) have also invented a variety of ways to elicit and collect additional data, such as asking users to accept a cookie, tag friends in images or rate a product. Data became “the new oil”.
  • Every time you access the internet to read the news, do a search, buy something, play a game, or check your email, bank balance or social media feed, you interact with this infrastructure.
  • It creates a “data-driven” network effect: data-driven AI both feeds on this infrastructure and powers it.
  • Risk: contrary to popular belief, these are not existential risks to our species, but rather a possible erosion of our privacy and autonomy as data (public and private) is leveraged.
  • Winters of AI discontent: the two major winters occurred in the early 1970s and late 1980s.
  • AI today has a strong, and increasingly diversified, commercial revenue stream.
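The spam example above is easy to sketch in code. Here is a minimal illustration using scikit-learn’s naive Bayes classifier; the messages and labels are invented toy data, not a real spam corpus:

```python
# "Data beats theory" in miniature: every message a user drags to the spam
# folder becomes a labelled example, and a simple statistical learner picks
# up the pattern. The messages below are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",          # dragged to spam
    "claim your free money",         # dragged to spam
    "limited offer win cash",        # dragged to spam
    "meeting moved to 3pm",          # kept in inbox
    "lunch tomorrow with the team",  # kept in inbox
    "quarterly report attached",     # kept in inbox
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Turn raw text into word counts, then fit a naive Bayes classifier.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

def classify(text):
    return model.predict(vectorizer.transform([text]))[0]
```

Nobody wrote down a theory of what spam looks like; the statistics of the labelled examples carry all the “intelligence”.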

 

At HIVERY, we combine Design Thinking and Lean Start-Up thinking with Machine Learning techniques to develop and release “new to the world” solutions that are intuitive yet powerful, applying deep science to help solve complex business problems.

 

HIVERY applies artificial intelligence to complex business problems. We do this through our methodology of DISCOVERY, EXPERIMENT and DEPLOYMENT.

 


 

  • Artificial intelligence (AI) includes:
    1. Natural language processing
    2. Image recognition and classification
    3. Machine learning (ML) – a subset of AI; Deep Learning (artificial neural networks – more below) is in turn a subset of ML
  • In 1950 Alan Turing published a groundbreaking paper called “Computing Machinery and Intelligence”, posing the question of whether machines can think.
  • He proposed the famous Turing test, which says, essentially, that a computer can be said to be intelligent if a human judge can’t tell whether he is interacting with a human or a machine.
  • The term “artificial intelligence” was coined in 1956 by John McCarthy, who organized an academic conference at Dartmouth dedicated to the topic, exploring the conjecture that learning, “cognitive thinking” or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
  • The phrase “machine learning” also dates back to the middle of the last century.  In 1959, Arthur Samuel (one of the attendees of Dartmouth conference) defined machine learning as “the ability to learn without being explicitly programmed.”
  • Samuel went on to create a computer checkers application, one of the first programs that could learn from its own mistakes and improve its performance over time.
  • Like AI research, machine learning fell out of vogue for a long time, but it became popular again when the concept of data mining began to take off around the 1990s.
  • Data mining uses algorithms to look for patterns in a given set of information.
  • Machine learning went one step further  – it changes its program’s behavior based on what it learns
  • Years of “AI winters” followed, due to a lack of big data sets and computing power.
  • Then IBM’s Watson winning the game show Jeopardy! and Google’s AI beating human champions at the game of Go returned artificial intelligence to the forefront of public consciousness.
  • Now machine learning is used for prediction and classification:
    • Natural language processing – IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data.
    • Image recognition – e.g. face recognition at Facebook with DeepFace – https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/
    • Recommender systems – Amazon highlights products you might want to purchase, Netflix suggests movies you might want to watch, and Facebook curates newsfeeds. HIVERY also uses recommender systems to help our customers have the right products in the right distribution channels at the right time and place.
    • Predictive analytics – HIVERY works in fraud detection, price strategy and new product distribution placement strategy.
  • Deep learning – often called an artificial neural network or neural net – is a system designed to process information in ways that are similar to how biological brains work.
  • Deep learning uses a certain set of machine learning algorithms that run in multiple layers. It is made possible, in part, by systems that use GPUs to process a whole lot of data at once.

Source: http://www.datamation.com/data-center/artificial-intelligence-vs.-machine-learning-whats-the-difference.html
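Arthur Samuel’s definition of machine learning – changing behaviour based on experience – can be illustrated with one of the simplest learners there is: a perceptron that adjusts its weights only when it makes a mistake. This is a toy sketch, not Samuel’s checkers program:

```python
# A toy perceptron: it "learns without being explicitly programmed" by
# nudging its weights every time it misclassifies a training example.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation > 0 else -1
            if pred != y:  # change behaviour only after a mistake
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: points are +1 when x1 + x2 is large.
data = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
target = [-1, -1, 1, 1]
w, b = train_perceptron(data, target)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

No one ever programs the decision rule explicitly; it emerges from the mistakes made on the data.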

We are currently working with a client in the FMCG sector who is trying to unpack better customer segments, or “clusters”, in order to engage and market to them better.

 

The problem we are trying to solve is: “How might we find customer segments that allow for better ROI on marketing initiatives?”

 

It’s an interesting project, and currently most companies face this challenge. Imagine, however, the ability to segment customers in a totally new way, discovering common needs or characteristics not humanly possible or conceivable. What if we allowed artificial intelligence to apply its own way to “segment” or “cluster”?

 

At HIVERY, we apply artificial intelligence to business problems. With this FMCG client, we set ourselves the challenge of unpacking new customer segments using our proprietary machine learning framework, and of running A/B market experiments to test the effectiveness of these new “machine-conceived” segments.

 

Below is a high-level view of our approach. At the end, we were able to create a custom application that allows the discovery of new segments, and makes communicating and measuring those new segments possible, using our proprietary machine learning framework.

 

 

Gather the data; apply our AI framework (an unsupervised learning algorithm); once new clusters are identified, work with our client (i.e. domain expertise) to unpack and refine what they mean; communicate the new segments to stakeholders for buy-in (i.e. marketing teams); test and measure marketing campaigns using the A/B testing method; fine-tune marketing actions and deploy wider.
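As a rough sketch (not our actual framework), the discovery step of that pipeline might look like this in scikit-learn; the customer features, data and cluster count are invented for illustration:

```python
# Sketch of the discovery step: run an unsupervised algorithm (k-means here)
# over customer features and inspect the segments it proposes.
# The features, data and k=2 are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Fake customer data: columns are [avg basket value, visits per month].
customers = np.vstack([
    rng.normal([20, 2], [3, 0.5], size=(50, 2)),   # occasional, small basket
    rng.normal([80, 12], [10, 2], size=(50, 2)),   # frequent, big basket
])

# Scale features so neither dominates, then let k-means propose segments.
X = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each customer now carries a machine-proposed segment label, ready to be
# unpacked with domain experts and tested via A/B campaigns.
```

The hard work in practice is not the clustering call itself but the unpacking, communication and A/B measurement that follow.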

 
So how do we do this? Well, let’s talk supervised and unsupervised learning algorithms. Supervised learning is when the dataset you feed your algorithm comes with “pre-defined tags” (labels). A classification algorithm therefore requires training data. Once the “machine learning training” is completed (i.e. a classification model has been created), the model is used to classify new datasets and help identify common needs or characteristics based on those pre-defined tags. This is how most customer segmentation is done. You tell the machine “this is a female, age 40-50”, and every time it recognises a “female, age 40-50” it groups them together.
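That label-driven workflow can be sketched in a few lines; the demographic features, labels and choice of classifier here are toy assumptions for illustration only:

```python
# Supervised segmentation sketch: the training data arrives with
# pre-defined tags, and the fitted model applies those tags to new customers.
from sklearn.tree import DecisionTreeClassifier

# Invented training rows: [age, gender_code], gender_code 0=male, 1=female.
X_train = [[45, 1], [42, 1], [48, 1], [25, 0], [30, 0], [55, 0]]
y_train = ["female 40-50", "female 40-50", "female 40-50",
           "other", "other", "other"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A new customer is grouped into an existing, human-defined segment.
segment = model.predict([[44, 1]])[0]
```

The model can only ever reproduce the segments a human defined up front, which is exactly the limitation the unsupervised approach below avoids.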

 

 

In unsupervised learning algorithms, there are no “pre-defined tags”, so no “machine learning training” is done at all. Here we allow the machine to identify patterns on its own. The way MACHINES SEE DATA is COMPLETELY different to the way HUMANS SEE DATA. This is why at HIVERY we say Data Has A Better Idea™.

 

Here is a dataset compared to the same data clustered by an unsupervised learning algorithm.

Original vs clustered dataset using unsupervised learning algorithm


 

 

In our example of “males vs females”, unsupervised learning algorithms might cluster based on a specific characteristic seen in the data itself (beyond how humans segment). Instead of “gender” and “age”, it might be some other variable like “likely to commit fraud”, “strongly likely to purchase an up-sell” or “will re-purchase within the next 4 days”.

 

New market segments based on unsupervised learning algorithm


 

 

I like Saimadhu Polamuri’s explanation of the difference between supervised and unsupervised learning algorithms: he talks about a basket filled with fresh fruit.

 

The task is to arrange the same type of fruit in one place, assuming the fruits are apples, pomegranates, bananas, cherries and grapes only.

 

You already know from previously learnt knowledge (your past “training”) how to recognise the shape of each and every fruit, so it is easy to arrange the same type of fruit in one place. As the fruits come in, you recognise them and arrange, or “cluster”, the same types together, forming different segments. This type of learning is called supervised learning.

 

 

In unsupervised learning, suppose you are an alien from another world handed the same basket, full of the same fruits. Like before, your task is to arrange them. But this time you don’t know anything about “fruits” – you are an alien, after all! You are seeing these fruits for the first time, so how will you arrange them? You might decide to select some physical characteristic of each fruit. Suppose you take colour. Then you arrange them based on colour, and it goes something like this:

  • Red colour group: apple, pomegranate & cherry
  • Green colour group: banana & grapes

Now you take another physical characteristic, like size, so you group them like this:

  • Red colour AND big one: pomegranate, apple
  • Red colour AND small one: cherry
  • Green colour AND small one: grapes
  • Green colour AND big one: banana

Here you haven’t learnt anything beforehand about “fruits”: there is no training data and no response variable. This type of learning is known as unsupervised learning.
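The alien’s grouping can be reproduced by an unsupervised algorithm: given only colour and size (and never the word “fruit”), k-means recovers the same four groups. The numeric feature values below are invented for illustration:

```python
# Unsupervised fruit sorting: no labels, only features (colour, size).
# colour: 0 = red, 1 = green; size in centimetres (values are invented).
from sklearn.cluster import KMeans

fruits = ["apple", "pomegranate", "cherry", "banana", "grape"]
features = [
    [0, 8.0],   # apple: red, big
    [0, 8.5],   # pomegranate: red, big
    [0, 2.0],   # cherry: red, small
    [1, 18.0],  # banana: green, big
    [1, 1.5],   # grape: green, small
]

# Ask for four groups; the algorithm has never been told what a fruit is.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# apple and pomegranate land in the same group; cherry, banana and grape
# each get their own, much like the alien's colour-and-size sort.
```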

 

Summary:

 

Supervised learning: discover patterns in the data that relate data attributes with a target (class) attribute. These patterns are then utilised to predict the values of the target attribute in future data instances.

 

Unsupervised learning: The data have no target attribute.

 

Both are types of machine learning techniques.

According to IBM, 90% of the world’s data was created in just the last 6 years – 90%! Fuelled by the internet generation, each and every one of us is constantly producing and releasing data, from individuals through to companies capturing customer information and sales transactions. These volumes of data make up what has been designated “Big Data”, and these massive data sets are piling up year on year. The problem is how to leverage this data to make better decisions.

There are three (3) simple stages one needs to unpack: Define the Problem, Solve the Problem, and Communicate Actionable Results Clearly.

 

 

 

1. DEFINE THE PROBLEM

 

 

The first stage is DEFINE THE PROBLEM. Speak with any startup founder or design thinker and they will tell you to fall in love with the problem, not the solution. In fact, Albert Einstein is said to have remarked: “If I had only one hour to save the world, I would spend fifty-five minutes defining the problem, and only five minutes finding the solution.” Coming up with solutions is the easy part; defining and solving the right problem is what can be challenging. This is no different with big data projects and trying to leverage them to make better, more insightful decisions.

Framing the problem is about defining the business question you want analytics to answer and identifying the decision you will make as a result. It’s a pretty important step. If you don’t frame the right problem, no amount of data or analysis in the world is going to get you the answers you are looking for.

Defining the problem splits into two parts: framing the problem (what you’re solving) and reviewing previous findings (what worked or didn’t work) to help you refine the problem.

Framing the problem involves asking yourself “Why is that a problem?”. Toyota famously created the “five whys” technique; it’s about understanding the root cause of the problem. Design companies like IDEO use phrases beginning with “How Might We…” to help frame the problem.

Reviewing previous findings involves finding out what worked in the past and why things didn’t work before. This also helps refine the problem.

 

 

 

2. SOLVE THE PROBLEM

 

 

The second stage is SOLVE THE PROBLEM. This is often thought to be the primary one. This is the stage where you start collecting the right variables (i.e. data fields), collecting sample data to test and play with, and doing some basic analysis to test assumptions quickly. It is a similar process to the Cross Industry Standard Process for Data Mining, commonly known by its acronym CRISP-DM.

CRISP-DM is a data mining process model that describes the approaches data mining experts commonly use to tackle problems.

 

 

CRISP-DM process diagram

 

 

At HIVERY we use a similar, simplified version called DEP: Discovery, Experiment/Pilot and Deployment.

 

 


 

 

3. COMMUNICATE ACTIONABLE RESULTS CLEARLY

 

 

 

The third and final stage is COMMUNICATE ACTIONABLE RESULTS CLEARLY. If you want anything to happen as a result of stages 1 and 2, you have to communicate your results effectively. If a decision maker does not understand the analysis or what the results mean, he or she won’t be comfortable making a decision based on them. In our “communication-challenged” world, communicating sophisticated analytical results effectively and simply makes a world of difference.

 

 

A good data visualisation book on this topic is Storytelling with Data: The Effective Visual Communication of Information by Cole Nussbaumer Knaflic.

 

 

Data needs to be engaging, informative and compelling. Humans often use stories to communicate effectively and to create memorable knowledge transfer.

Here is an interesting presentation by Tom Davenport at the 2015 Salesforce keynote.

Tom Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a Fellow of the MIT Center for Digital Business, and a Senior Advisor to Deloitte Analytics. He teaches analytics and big data in executive programs at Babson, Harvard Business School, MIT Sloan School, and Boston University. He pioneered the concept of “competing on analytics” with his best-selling 2006 Harvard Business Review article (and his 2007 book by the same name). His most recent book is Big Data@Work, from Harvard Business Review Press.

 

His views are consistent with what we do at HIVERY: we are pioneering what we call “Prescriptive Science™” – what he calls “Analytics 4.0”.

 

Below I’ve summarized his talk, along with a few examples:

 

 


Here is a summary.  He talks about 4 types of analytics:

 

  • Analytics 1.0 was the era of business intelligence –  descriptive, which reports on the past.
  • Analytics 2.0 was about Big Data – which uses models based on past data to predict the future;
  • Analytics 3.0 is the era of data-enriched offerings – which is about being prescriptive. It uses models to specify optimal behaviors and actions.
  • Analytics 4.0 – the idea of automated analytics. These come to full fruition in a new era where machines talk to machines to carry out decisions without human input.
    • Moving towards “automated decisions from interconnected smart machines”.
    • Connected sensors and the “Analytics of Things”.
    • Things will be “augmented” rather than “automated”.

 


Examples of established (old) companies leveraging data-driven approaches for business model innovation:

 

 

GE 3.0 – a 123-year-old company

  • Built a new business: a $2b initiative in software, analytics and what they call the “Industrial Internet”, named Predix.
  • Producing “data-based products and services”
  • Adds sensors to its industrial products (e.g. gas turbines, jet engines and trains) to understand how they are performing.
  • Key objective: to revolutionise the services it offers.
  • Why? 75% of GE’s profit comes from industrial services. If it can estimate when a jet engine is going to break, and service it before it breaks, it can make a lot of money. By selling a unit of “jet thrust” instead of a jet engine, and producing that “jet thrust” for less, it can make more profit. So GE puts sensors on its jet engines to service them better (i.e. to forecast when they are likely to break down), but also to create a new business model of charging for thrust rather than engines.

 

Monsanto 3.0 – a 114-year-old company

  • Known for creating “frankenfood”, but Monsanto is not the only company that produces genetically modified organisms.
  • If we are going to feed 9 billion people, I think we are going to need some innovation in agriculture. It is an agricultural biotechnology company.
  • It has a “precision planting” or “prescriptive planting” offering sold to farmers. That is, not just selling seeds and pesticides, but selling “advice” to farmers: when to plant, what to plant, how much to water, when to apply pesticides, when to harvest this year. These are new “data products”.
  • In 2013 it acquired a huge agritech startup, Climate Corporation, for approximately $1.1 billion. The company uses machine learning to predict the weather and other essential elements for agribusiness.
  • It provides highly granular, “field-level” weather data – called “FieldScripts” – that gives insight into when it is going to rain and when pests are going to appear, prescribing when to apply pesticides to farms.
    – With this advice, farmers’ yields increase by 10% to 20%.

 

Ford 3.0 – a 112-year-old company

  • Bill Ford “The car is really becoming a rolling group of sensors”
  • Ford Creates New Chief Data, Analytics Officer Position
  • Ford’s digital analytics and optimisation team defined Ford’s digital web analytics strategy and standards for all B2C properties, providing Ford the competitive advantage of integrated business intelligence and targeted marketing opportunities. Examples include:
  • Targeted marketing – Digital In-Market Manager is responsible for all activities related to targeting and messaging in-market auto shoppers in order to convert them to Ford customers. This includes establishing an “always on” approach to targeting shoppers online. Position responsibilities will include strategy, creative, production, optimization and media planning for all in-market activities
  • Business intelligence – helping dealers become more successful by providing a smart inventory system: instead of sending new cars to all dealers, Ford can now figure out what type of car is most likely to sell on a particular dealer’s lot, increasing revenue by $100m.

LinkedIn – a 14-year-old company

  • Has a lot of “data products”, including “People You May Know”, “Jobs You May Be Interested In” and “Groups You May Like”.
  • Uses its data to determine who is most likely to buy LinkedIn services.

 

View it here https://www.salesforce.com/video/183657/ or http://www.tomdavenport.com/blogs-articles/

Lists companies similar to IDEO, Frogdesign, Adaptive Path,  Continuum, Jump, Cooper etc

Credit to this Quora.com post, here is the list…

Product Development & Design

Web & Software Based Product Design

Mobile & Connected Device Design & Development

Service Design Consultancies 

Humanitarian Design & Social Innovation 

Ethnography for Innovation

Design for the Network

I was reading Steve Blank’s post here and it got me thinking: how do you really assess an early-stage startup in order to make a business decision (i.e. invest, buy out, integrate etc.)? What tools or methods are used that offer a common process and consistency? We can use the Business Model Canvas and assess each of the 9 boxes, or use something like VC Fred Wilson’s simple litmus test of 5 drivers (I’ve used Snapchat to illustrate):

  1. Right Person: Who are the people running the startup? Why are they passionate about this idea? What past experience do they have? Do they have some special competitive advantage or domain expertise? Are they willing to leave their full-time work today to pursue their startup? With Snapchat, for example, it was Evan Spiegel and Bobby Murphy in 2011, both from Stanford University, who were working on a startup called Future Freshman designed to help high school kids get advice on colleges, careers etc. They later pivoted, realizing kids’ key concern was their lives being recorded permanently and how this would hinder their career prospects.
  2. Right Idea: Is the idea solving a real problem? Is the solution a vitamin (solving a trivial problem) or a drug (solving a serious problem)? With Snapchat it was a big problem amongst teenagers: teens don’t want their daily lives permanently recorded.
  3. Right Product: Is the product designed well enough to account for the target customers’ usage and behavior? Is it simple to use or apply? With Snapchat, users could control who received any communication and how long it lasted. Best of all, it was all on their mobile – the target customer’s prime communication method.
  4. Right Time: What market, government or social trends suggest it is the right time to launch such a product? With Snapchat, as more and more teens recorded their lives digitally using social media, they needed another way to communicate freely and safely, without permanence.
  5. Right Market: What is the market being targeted? Is the market growing? Who are the competitors? With Snapchat, the product centered on mobile-native teens and social media activity – both trends expanding rapidly, with limited alternatives.

The point I’m trying to make here is that there is no common language when assessing early-stage startups. This is why I like Steve Blank’s Investment Readiness Level (IRL). I would like to devote more time to it, because it’s a relatively new concept taken from a relatively established method.

 

The IRL is based on the Technology Readiness Level (TRL), a measure for assessing the maturity of evolving technologies. The TRL was used in the 1980s by NASA as a way to describe the maturity and state of “flight” readiness of its technology projects, and to help track projects (and budgets). Steve has adapted this method (similar to what he did with the Business Model Canvas after Alexander Osterwalder created it in his book Business Model Generation) to assess early-stage startups, providing a more evidence-based way to determine where the startup is.

 

Here is the NASA “thermometer” version used to assess the readiness of their projects:

 

NASA's Technology Readiness Level


 

With the IRL, for the first time, you have a common language to describe the readiness level of early-stage ventures. This method can be used not only by VCs or accelerators, but also inside large companies, to help convince an existing business unit a venture is worth integrating, or to provide clarity over making it a spin-out.

 

The following slides have been taken from Steve Blank’s presentation found here, but here are the key points:

 

  • Level 1 and 2: Are the main (riskiest) hypotheses understood? Are all the hypotheses on the business model canvas listed? How is the startup articulating its customer value proposition? Is it clear?
 Investment Readiness Level: 1 & 2


  • Level 3 and 4: Has the startup discovered a real problem worth solving, and will the solution deliver on the customer value proposition they promised? Does the startup have a valid MVP that proves there is a strong Problem/Solution Fit?
Investment Readiness Level: 3 & 4


  • Level 5 and 6: Has the MVP been validated to determine Product/Market Fit? Is the right side of the business model canvas validated? That is, does the solution deliver on the promise to the target customers?
Investment Readiness Level: 5 & 6


  • Level 7 and 8: Is the left side of the business model validated? That is, have they got the right business partners to help them deliver the customer value proposition?
Investment Readiness Level: 7 & 8


  • Level 9: Do they have any investable metrics that matter? These include acquisition rate, activation rate, retention rate, referral rate and revenue. That is, can we assess their AARRR? You can learn more about AARRR here.
Investment Readiness Level: 9

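Those AARRR metrics are just ratios between successive funnel stages. A toy computation (all numbers invented) looks like this:

```python
# Toy AARRR ("pirate metrics") funnel: each rate is the count at one stage
# divided by the count at a preceding stage. All numbers are invented.
funnel = {
    "visitors": 10000,   # acquisition pool
    "signups": 2000,     # acquired
    "activated": 1200,   # completed the first key action
    "retained": 600,     # came back in week 2
    "referrers": 90,     # invited someone else
    "paying": 150,       # generated revenue
}

acquisition_rate = funnel["signups"] / funnel["visitors"]
activation_rate = funnel["activated"] / funnel["signups"]
retention_rate = funnel["retained"] / funnel["activated"]
referral_rate = funnel["referrers"] / funnel["retained"]
revenue_rate = funnel["paying"] / funnel["retained"]
```

The exact stage definitions vary by business model; the point is that each rate is measurable and comparable over time.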

 

Here is a summary of the Investment Readiness Level:

 

Investment Readiness Level


 

Here is an example of an early-stage startup being assessed using the IRL. You can see this assessment in action in the video here.

 

Assessing a StartUp via IRL


Below is a quick summary of things to consider when setting up a partnership, such as a joint development agreement or licensing.

 

  1. First determine the “principles of our partnership”: who brings what, why, and what are the expectations?
  2. Determine what structure the business should proceed with (i.e. joint spinout vs licensing agreement)
  3. Determine the business terms (i.e. term sheet) we would like to operate under
  4. Determine the commercial terms based on the term sheet

With #3, determining the business terms, you need to work out a few things, such as:

 

1. Key business terms over IP:

  1. Right to manufacture the product as we see fit
  2. Right to distribute in x geographic territory
  3. Right to use the IP in x field of use
  4. Right to negotiate a license at time x and time z
  5. Right to use the IP over x time/years
  6. Right to access documentation, service and support for the IP from the licensor
  7. Right to sublicense the IP to x partners (i.e. manufacturing and design firms)
  8. Right to future versions of the IP (i.e. any updates or advancements)
  9. Agree to a non-compete provision
  10. Others – what else do we need to consider?

Granted, the MORE of these “exclusive terms” you add to the license, the more expensive the royalty fee becomes.

You can therefore either:

  • Limit the above terms to apply only once you have achieved a certain minimum of product sales
  • Limit the term to a shorter time period
  • Limit the field of use or geographic territory etc.

2. Financials

If you agree on the above terms, you then need to consider the financial aspects of your agreement, such as:

  1. Upfront fee: How much should be paid upfront (if any)? For example, should the licensor be paid $X upon execution of the agreement, or once a beta version is released, or when sales reach a minimum? What is best for us?
  2. Royalty fee: For example, should royalties be paid as X% of “adjusted gross sales”, or of “net prices” or “revenues”? You need to consider your partner’s affiliates and sublicensees too. What minimums on royalties (if any) do you propose? Should you put a cap on these royalties once sales reach x? If you cap, and they agree, when is it ideal to renew it – annually or at a milestone?
  3. Non-sale-based fees on income: What about income not directly related to sales, like data cloud usage and storage for your product? If included, what additional x% do you pay on all non-sale-based income? Again, do you cap this? Is it a stepped scale?
  4. Minimum royalties: Do you need to be proactive and suggest an annual minimum commitment? For example, you shall pay annual minimum royalties according to a schedule: 2013 x%, 2014 x%, 2015 x% etc. A royalty may begin at, say, 2% (of the average sales price) but decrease to 0.5% over the life of the agreement.
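A declining royalty schedule with an annual minimum, as described above, is easy to sketch; the rates, years and minimum amount here are purely illustrative, not terms from any real agreement:

```python
# Sketch of a declining royalty schedule with an annual minimum.
# All rates, years and amounts below are invented for illustration.
schedule = {2013: 0.02, 2014: 0.015, 2015: 0.01, 2016: 0.005}  # rate by year
annual_minimum = 50_000  # floor the licensor receives regardless of sales

def royalty_due(year, adjusted_gross_sales):
    """Royalty owed for a year: rate times sales, floored at the minimum."""
    rate = schedule[year]
    return max(adjusted_gross_sales * rate, annual_minimum)
```

Note how the minimum shifts risk to the licensee: in a weak sales year the licensor still receives the floor amount.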

3. Milestones

  1. Payment milestone triggers: Consider the key milestones from which royalties are paid (e.g. when certain minimum average monthly sales are met, or instalments timed to coincide with development milestones such as a beta release).
  2. Working milestone triggers: e.g. when MVP testing is completed successfully, when a formal business plan is submitted, when a beta version is released, at commencement of manufacturing, or at the first 50,000 sales.

4. Questions

  • What leverage do you have that can be used to negotiate a desirable outcome?
  • Is there anything else you can use to leverage your position to get a “good deal”?
  • What specifically do you want to license? What aspect of the IP do you want to license?
  • How much should you pay for the IP license? This will again depend on the number of exclusive terms you set, but consider:
    • How much can I afford to pay for this license?
    • What is the maximum % you can realistically pay and still make a reasonable profit if your product sells for $50?
    • What will our market bear?
  • What data and documents do we need from our business partner to help develop the product and support marketing, manufacturing etc.?
  • What are the performance terms, warranties and indemnities? What happens if there is an IP defect? Or a legal action/claim against your IP partner?
  • What is our exit strategy should we decide, after 1 or 5 years, that it is not meeting business objectives?