FAQ

  • What is Artificial Intelligence (AI)?

    Most modern Artificial Intelligence (AI) technology is built on machine learning (ML), a sub-field of AI in which systems learn from data rather than follow hand-written rules.

    AI refers to models that are designed to perform tasks that typically require human or animal intelligence. These tasks include things like recognising patterns, learning from experience and understanding natural language.

    AI systems use algorithms (defined processes and sets of rules) together with data to make decisions and perform tasks.

    AI is designed to excel at performing predefined tasks efficiently and often to outperform humans.

    AI is increasingly leveraged in all parts of society, such as in financial services, healthcare and transportation.

    AI also refers to the field of computer science research that develops and studies the intelligence of machines. AI may also refer to the machines themselves.

    AI is multifaceted, continues to evolve and plays an increasingly significant role in various aspects of our lives, making it an exciting and dynamic field.

  • What is Generative AI?

    Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.

    These models are built on large language models (LLMs), which in turn rely on the transformer architecture.

  • What is Artificial General Intelligence (AGI)?

    Sam Altman, CEO of OpenAI, defines Artificial General Intelligence (AGI) as “AI systems that are generally smarter than humans”.

    Allegra AI’s take is that “AGI aims to mimic human-like intelligence, in various aspects of our future lives and the expectations are such that AGI is intended to empower humans, to address human-problems”.

    What is broadly agreed is that AGI is a type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge in a wide range of tasks, at a level comparable to human intelligence.

    It is also agreed that AGI would have profound impacts on the future of humanity. This technology in its truest form does not exist, or is not available to the public - yet.

  • Key differences between AI, Generative AI and AGI?

    Artificial Intelligence (AI), Generative AI and Artificial General Intelligence (AGI) are related concepts, but have distinct differences in terms of capabilities.

    • Traditional AI systems are primarily used to analyze data and make predictions, while

    • Generative AI goes a step further by creating new data similar to its training data.

    • In other words, traditional AI excels at pattern recognition, while generative AI excels at pattern creation.

    • Artificial General Intelligence (AGI) refers to a type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge in a wide range of tasks at a level comparable to human intelligence.

  • What are LLMs?

    Large language models (LLMs) are advanced computer programs trained to understand and generate human-like text.

    LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence, so they sometimes produce results that are not factual.

    Because of this inherent predictive nature, LLMs can also fabricate information in a process that researchers call “hallucination”. They can generate made-up numbers, names, dates, quotes — even web links or entire articles.

    They're based on a design called the "transformer" and can do tasks like answering questions, writing, and translating. While they're powerful and versatile, they can sometimes make mistakes or show biases from the data they were trained on. It's important to use them responsibly to avoid misuse or spreading misinformation.
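    To make the pattern-spotting point concrete, here is a minimal sketch of next-word prediction using a toy bigram model. This is a drastic simplification of what an LLM does (real models use neural networks over enormous corpora), and the corpus and function names are illustrative only.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in the
# training text, then predict by picking the most frequent continuation.
# Same principle as an LLM at vastly smaller scale: statistical guessing,
# not fact lookup.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, a pattern guess."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" is the most frequent follower of "the"
print(predict_next("sat"))  # "sat" is always followed by "on"
```

    Because the model only counts patterns, it will happily continue a sentence with something statistically plausible that need not be true, which is the same mechanism behind LLM hallucinations.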

    The future of LLMs draws differing opinions from thought leaders in AI. What seems probable, however, is that LLMs will become more common in everyday tech, like chatbots, and will be tailored to address specific issues within healthcare, law, financial services and manufacturing.

    Some LLM examples by provider are:

    • OpenAI - GPT-4 > ChatGPT

    • Google - PaLM > Bard

    • Anthropic > Claude

    • Meta > LLaMA

    • Cohere > Command

  • What is ChatGPT?

    ChatGPT is an artificial intelligence (AI) chatbot that uses natural language processing to create human-like conversational dialogue. The GPT stands for generative pre-trained transformer.

    General-purpose language models: OpenAI is particularly known for its GPT (Generative Pre-trained Transformer) series, such as GPT-3 and GPT-4. These models are considered state-of-the-art for various natural language processing (NLP) tasks.

    ChatGPT is built to respond to questions and compose various written content, including articles, social media posts, essays, code and emails. It’s super fun too! You can ask pretty much any question you want; some key tips are: be specific, provide context if building on earlier information, iterate (rephrase) if you need to, and ask one question at a time!

    In September 2023, a new ChatGPT interface was launched, “allowing you to have a voice conversation or show ChatGPT what you’re talking about. Voice and image give you more ways to use ChatGPT in your life. Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it. When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you”.

  • What are some cool examples of AI?

    Spotify's voice transcription tool Whisper is based on OpenAI's technology, which translates speech into other languages. Its early adopters are podcasters Lex Fridman and Steven Barlett. Open AI say this technology “approaches human level robustness and accuracy on English speech recognition”. There are other tools like this in the market such as Pixel, however this technology will likely be utilised by wider distribution and platform providers.

  • What is NLP?

    Natural Language Processing (NLP) is an AI technology that allows computers, from voice assistants such as Siri and Alexa to chatbots such as ChatGPT, to understand, interpret, and respond to human language.

  • What is AGI alignment?

    In preparation for superintelligent AI systems, a new scientific field of thought leaders has emerged to tackle head on the questions concerned with ‘alignment’. It seeks to address and respond to important questions such as ‘how do we ensure the safety of humans if AI systems are much smarter than humans?’ and ‘how do we ensure these systems don’t go rogue and cause serious harm to humans and our societies through misuse, resulting in economic disruption, disinformation, discrimination, over-reliance, addiction and other potential harms?’

    It is likely that with each evolution of the technology we will find the appropriate guardrails and put them in place, such that super AI systems operate in line with human intent and values.

  • What are AI model transformers?

    Transformers, the architecture that LLMs are built on, radically sped up and improved how computers process language. Before transformers, the state-of-the-art AI translation methods were recurrent neural networks (RNNs), which scanned each word in a sentence and processed it sequentially.

    A key concept of the transformer architecture is self-attention. This is what allows LLMs to understand relationships between words.

    Transformers process entire sequences at once — be that a sentence, paragraph or an entire article — analysing all its parts and not just individual words. This allows the software to capture context and patterns better, and to translate — or generate — text more accurately. This simultaneous processing also makes LLMs much faster to train, in turn improving their efficiency and ability to scale.
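    The self-attention step can be sketched in a few lines. This is a bare-bones illustration, not a real transformer: actual models add learned query/key/value projections and many parallel attention heads, and the toy "token embeddings" below are arbitrary.

```python
import math

# Minimal sketch of self-attention. Every token's vector is scored
# (dot product) against every other token's vector; softmax turns the
# scores into weights; each token's output is a weighted mix of the
# whole sequence, so the whole sequence is processed at once.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    d = len(vectors[0])
    outputs = []
    for q in vectors:  # each token attends to every token in the sequence
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)  # how much each other token matters here
        outputs.append([sum(w * v[i] for w, v in zip(weights, vectors))
                        for i in range(d)])
    return outputs

# three toy 2-d "token embeddings"
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(tokens)
```

    Each output row blends information from all three tokens at once, which is how the architecture captures context across a whole sentence rather than word by word.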

    Aside from powering chatbots, transformers have also brought us the autocomplete on our mobile keyboards and the speech recognition in our smart speakers.

    Its real power, however, lies beyond language.

    Transformers can recognise and predict repeating motifs or patterns: from pixels in an image, using tools such as DALL-E, Midjourney and Stable Diffusion, to computer code, using generators like GitHub Copilot. They can even predict notes in music and amino-acid sequences in proteins to help design drug molecules.

    Until now we have relied on specialised individual models to summarise, translate, search and retrieve. The transformer unified all those actions into a single structure capable of performing a huge variety of tasks.

  • What are AI hallucinations and reinforcement learnings?

    Because AI models look for patterns rather than check facts, they sometimes produce results that are not factual; these are called AI hallucinations.

    Many predict that hallucinations will never be completely eliminated; however, experts are working on limiting them through a process known as “grounding”.

    This involves cross-checking an LLM’s outputs against web search results and providing citations to users so they can verify.
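    A much-simplified sketch of that idea, with a made-up word-overlap heuristic standing in for real retrieval and verification (the function name, threshold and example texts are all illustrative):

```python
def is_grounded(sentence, sources, threshold=0.5):
    """Flag a sentence as grounded if enough of its longer words also
    appear in at least one retrieved source passage."""
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    if not words:
        return True
    best = max(len(words & {w.lower().strip(".,") for w in src.split()})
               for src in sources)
    return best / len(words) >= threshold

sources = ["The Eiffel Tower was completed in 1889 in Paris."]
print(is_grounded("The Eiffel Tower was completed in 1889.", sources))  # True
print(is_grounded("The tower was moved to London in 1925.", sources))   # False
```

    Production grounding pipelines use real web or document retrieval and far stronger checks, and attach citations so users can verify the claims themselves.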

    Reinforcement learning is a technique that allows algorithms to learn tasks simply by trial and error. Models are ‘rewarded’ for successfully performing a task and ‘punished’ for failing. With repetition, performance improves and can surpass human capabilities, so long as the training environments are representative of the real world.
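    That reward-and-punishment loop can be sketched with a three-armed bandit: the agent tries levers, is rewarded when one pays out, and nudges its value estimate for whichever lever it tried. The payout numbers and learning settings below are illustrative.

```python
import random

random.seed(0)                 # fixed seed so the run is repeatable
true_payout = [0.2, 0.5, 0.9]  # hidden reward probability per lever
estimates = [0.0, 0.0, 0.0]    # the agent's learned value estimates
lr, epsilon = 0.1, 0.1         # learning rate and exploration rate

for _ in range(2000):
    if random.random() < epsilon:   # occasionally explore a random lever
        action = random.randrange(3)
    else:                           # otherwise exploit the best estimate
        action = max(range(3), key=lambda a: estimates[a])
    # 'reward' for success, 'punishment' (zero reward) for failure
    reward = 1.0 if random.random() < true_payout[action] else 0.0
    estimates[action] += lr * (reward - estimates[action])

best = max(range(3), key=lambda a: estimates[a])  # the lever the agent prefers
```

    After enough trials the estimates approach the true payout rates, so the agent settles on the highest-paying lever without ever being told which one it was.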

    Reinforcement learning can help AI transcend the natural and social limitations of human labeling by developing previously unimagined solutions and strategies that even seasoned practitioners might never have considered.

    Humans are also used to provide feedback and fill gaps in information — a process known as reinforcement learning by human feedback (RLHF) — which further improves the quality of the output. But it is still a big research challenge to understand which queries might trigger these hallucinations, as well as how they can be predicted and reduced.

  • What is deep learning?

    Deep learning is a sub-set of machine learning, and most AI models are trained through ‘supervised learning’.

    Deep learning uses large-scale neural networks that can contain millions of simulated “neurons” structured in layers. The most common networks are called convolutional neural networks (CNNs) and recurrent neural networks (RNNs). These neural networks learn through the use of training data and backpropagation algorithms.
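    A minimal sketch of backpropagation on the smallest possible "network", one simulated neuron per layer, with the analytic gradient checked against a numerical one. The weights and inputs are arbitrary illustrative values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)   # hidden layer "neuron"
    y = sigmoid(w2 * h)   # output layer "neuron"
    return h, y

def loss(x, target, w1, w2):
    _, y = forward(x, w1, w2)
    return 0.5 * (y - target) ** 2

def backprop(x, target, w1, w2):
    """Chain rule, applied backwards from the loss to each weight."""
    h, y = forward(x, w1, w2)
    dy = (y - target) * y * (1 - y)   # gradient at the output neuron
    grad_w2 = dy * h
    dh = dy * w2 * h * (1 - h)        # gradient pushed back to the hidden layer
    grad_w1 = dh * x
    return grad_w1, grad_w2

x, target, w1, w2 = 0.5, 1.0, 0.3, -0.7
g1, g2 = backprop(x, target, w1, w2)

# sanity check: compare against a numerical gradient for w1
eps = 1e-6
num_g1 = (loss(x, target, w1 + eps, w2) - loss(x, target, w1 - eps, w2)) / (2 * eps)
```

    Training a real network is this same computation repeated over millions of neurons: compute the loss, propagate gradients backwards layer by layer, and adjust every weight a little.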

  • What are generative adversarial networks (GANs)?

    In this semisupervised learning method, two networks compete against each other to improve and refine their understanding of a concept. To recognize what birds look like, for example, one network attempts to distinguish between genuine and fake images of birds, and its opposing network attempts to trick it by producing what look very much like images of birds, but aren’t. As the two networks square off, each model’s representation of a bird becomes more accurate.
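    The adversarial game can be sketched in one dimension, with a single number standing in for an image. Here the "real" data is just the value 5.0, the generator is one parameter (the fake sample it produces) and the discriminator is a one-input logistic classifier; all names and settings are illustrative, not a practical GAN.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

theta = 0.0       # generator's parameter: the fake sample it produces
w, b = 0.1, 0.0   # discriminator D(x) = sigmoid(w * x + b)
lr = 0.05

for _ in range(4000):
    real, fake = 5.0, theta
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    # discriminator step: descend -log D(real) - log(1 - D(fake)),
    # i.e. learn to score real data above the generator's fake
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)
    # generator step: ascend log D(fake), i.e. follow the discriminator's
    # own gradient to make the fake sample more convincing
    theta += lr * (1 - sigmoid(w * theta + b)) * w
```

    As the two square off, theta is dragged from 0 toward the real data around 5.0: the generator's output becomes progressively harder to tell from the real thing, which is the dynamic the bird example above describes.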

    The ability of GANs to generate increasingly believable examples of data can significantly reduce the need for data sets labeled by humans. Training an algorithm to identify different types of tumors from medical images, for example, would typically require millions of human-labeled images with the type or stage of a given tumor. By using a GAN trained to generate increasingly realistic images of different types of tumors, researchers could train a tumor-detection algorithm that combines a much smaller human-labeled data set with the GAN’s output.

    While the application of GANs in precise disease diagnoses is still a way off, researchers have begun using GANs in increasingly sophisticated contexts. These include understanding and producing artwork in the style of a particular artist and using satellite imagery, along with an understanding of geographical features, to create up-to-date maps of rapidly developing areas.