Has Generative AI solved our cognitive labour problems? – Decoding ChatGPT
By Anishka Prasad – Head of Strategy & Operations – Future VC
AI has fascinated us for decades, and the strides that came with ChatGPT’s release last November have provoked various schools of thought and got many thinking about the endless possibilities AI may have in store for us. Microsoft’s Satya Nadella gave a rather philanthropic and anecdotal example of ChatGPT’s potential recently in The Wall Street Journal, describing the technology serving farmers in rural India, solving local council application issues for them, and so on.
On the other hand, it has also made technical sceptics question whether ChatGPT is yet another over-hyped and somewhat pretentious piece of tech that does not go beyond advanced pattern matching and recognition when answering search requests. It has also brought into question the potential adverse ethical impact the technology may have on society while operating in an unregulated environment, and the resulting risk of security breaches and irreversible mishaps.
So, what is ChatGPT & why is it trending in Generative AI?
ChatGPT is a form of Generative AI which promises its users ‘limitless’ search requests, and to a convincing extent it delivers, answering advanced queries through pretrained language modelling and pattern recognition. The idea is that a user can ask ChatGPT literally anything, and it responds to the best of its pattern-recognition capabilities. Hence, the patterns it relies upon dictate the accuracy of its responses, which does not necessarily mean those responses make sense or reason as a human would. Given this, many have concluded that the fine-tuned GPT-3.5 still has its limits, and often fails to reason correctly under human auditing.
It’s also important to understand that these challenges are inevitable, as the model is heavily reliant on datasets trained on and sourced from the internet. Since its training dataset is sourced from the internet, the training data may overlap with some of the testing datasets, skewing evaluation results.
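The train/test overlap concern above is often called data contamination. A minimal sketch of how one might flag it is shown below; the function names, the n-gram size and the overlap threshold are illustrative choices, not a real deduplication pipeline, which would work at far larger scale with hashing and indexing.

```python
# Sketch of a data-contamination check: flag a test example whose word
# n-grams largely already appear somewhere in the training corpus.

def ngrams(text, n=3):
    """Set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_example, train_corpus, n=3, threshold=0.5):
    """True if a large share of the test example's n-grams occur in training data."""
    test_grams = ngrams(test_example, n)
    if not test_grams:
        return False
    train_grams = set()
    for doc in train_corpus:
        train_grams |= ngrams(doc, n)
    overlap = len(test_grams & train_grams) / len(test_grams)
    return overlap >= threshold

train = ["the quick brown fox jumps over the lazy dog"]
print(is_contaminated("the quick brown fox jumps high", train))  # True
print(is_contaminated("completely different sentence here now", train))  # False
```

A contaminated test example inflates apparent performance: the model can pattern-match text it has effectively already seen, rather than demonstrating generalisation.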
Having said that, challenges aside, it’s still a peek into Generative AI’s future and potential, especially after ChatGPT’s release led to a further $10 billion investment by Microsoft into OpenAI, making more room for innovation.
Unit Economics – Is ChatGPT the end of Google?
Although billions in funding for ambitious OpenAI projects might sound like a recipe for success and a leap forward for generative AI, such an investment is not exempt from the startup challenge of turning a free, accessible product into a premium, commercially successful one. As mentioned in TechCrunch:
From an average user’s perspective, and for comparison’s sake, one AI answer currently costs 10x to 100x as much as a web/Google search. According to Sam Altman, co-founder of OpenAI, the cost of one search is in the single-digit cents. If we ran with 5 cents per search as a rule of thumb, and Google carries out 8.5 billion searches per day, we’re looking at a cost of $425 million per day to run such a platform. Furthermore, getting into the technicalities of the unit economics of running a language model like GPT-3, here is the pricing for the AI Model Studio service on a four-node CS-2 cluster when training a GPT-3 run from scratch:
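The back-of-envelope arithmetic above can be checked directly, using only the figures already quoted (5 cents per AI-assisted query, ~8.5 billion Google searches per day, and the 10x–100x cost gap versus traditional search):

```python
# Back-of-envelope check of the per-day cost figures quoted above.
cost_per_query_usd = 0.05      # Altman's "single-digit cents", taken as 5c
searches_per_day = 8.5e9       # Google's reported daily search volume

daily_cost = cost_per_query_usd * searches_per_day
print(f"${daily_cost / 1e6:.0f} million per day")  # $425 million per day

# For contrast, a traditional web search at 10x-100x cheaper per query:
web_low, web_high = cost_per_query_usd / 100, cost_per_query_usd / 10
print(f"web search: ${web_low:.4f} to ${web_high:.3f} per query")
```

The $425 million/day figure is what makes the free-at-point-of-use model commercially hard at Google-like scale, which is the crux of the unit-economics argument.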
Essentially, the costs of running such a system efficiently, and of making it accessible on hardware devices such as our mobile phones, are challenges yet to be figured out. Currently, the platform cannot perform at an unlimited frequency of search requests and cater to billions of searches, as the unit economics of running these searches for free are not commercially viable.
What does Generative AI mean for B2B SaaS?
From an optimist’s view, however, the world of B2B SaaS, and sectors serving large professional clients such as fintech, healthtech and lawtech, can benefit greatly from this AI breakthrough, mainly because automation of certain functionalities can develop beyond simplistic API plugins, and founding teams of early-stage start-ups will be able to incorporate tools accessible via OpenAI to expedite their innovative visions.
Specifically, the biggest leap forward in AI for B2B SaaS and other companies will be the ability to build models on top of the existing product infrastructure from the likes of OpenAI. This means less time and capital spent on creating foundational infrastructure for complex AI models, which was a major roadblock in the recent past, while companies lean on existing infrastructure and continue to build proprietary layers of technology unique to each project. As a result, AI startups will be able to verticalise and fine-tune language models with more ease to optimise their platforms’ functionality.
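As a rough illustration of what "building on top of existing infrastructure" looks like in practice, the sketch below prepares proprietary training examples in the JSONL chat format used for fine-tuning hosted models, assuming the OpenAI Python SDK; the model name, file names and the compliance-Q&A example are all placeholders, and the API calls themselves are shown only as comments:

```python
# Sketch: preparing proprietary data to fine-tune a hosted language model,
# rather than building foundational model infrastructure from scratch.
import json

# Hypothetical vertical-specific examples (here, a fintech-compliance bot).
examples = [
    {"messages": [
        {"role": "system", "content": "You answer UK fintech compliance questions."},
        {"role": "user", "content": "What is an EMI licence?"},
        {"role": "assistant", "content": "An Electronic Money Institution licence ..."},
    ]},
]

# Write one JSON object per line (JSONL), the format fine-tuning jobs expect.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The upload and job-creation steps (not run here) would look roughly like:
#   client = openai.OpenAI()
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")
```

The proprietary layer here is the data and the system behaviour it encodes; the heavy lifting of pretraining and serving stays with the platform provider, which is exactly the cost saving the paragraph above describes.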
Disclaimer: This blog was not written via ChatGPT.