Google Gemini


Updated for version: March 4th, 2024

Accessible via: https://gemini.google.com/app

Requires login via: Google

Ratings

Accuracy / Quality ★★★☆☆

Flexibility / Features ★★★★☆

Data security / Privacy ★☆☆☆☆

Pros/cons

Pros

  • Image recognition
  • Voice input
  • Alternative draft answers
  • Internet access
  • Shortcut to check answers with Google search

Cons

  • Substantial hallucination rate
  • No control over data use
  • No standardised user instructions (e.g. persistent custom instructions)

Description

This Large Language Model (LLM) uses the Gemini 1.5 Pro model with Vision and Voice integration. A more advanced model is available, but it is locked behind a subscription. The accuracy of the model depends on the type of question asked: accuracy is higher for basic questions and for questions that can be answered using recent information from internet sources. The ability to interpret images and voice expands its functionality, but the model cannot output any images or sounds. The option to validate the model’s output against a typical Google search, with sources attached, adds a small measure of reliability, and the various draft versions of each answer allow for greater variation in the responses.
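
This review covers the web app, but the same model family is also accessible programmatically, which can be useful for reproducing the tests below. The following is a minimal sketch, assuming Google’s google-generativeai Python SDK and an API key in the GOOGLE_API_KEY environment variable; the model name is illustrative, as availability changes over time.

```python
# Minimal text-only request to a Gemini model via the official Python SDK.
# Assumes `pip install google-generativeai` and a valid API key.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

response = model.generate_content("Summarise what lab-grown meat is in two sentences.")
print(response.text)
```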

Gemini was trained on a large amount of data collected from various online sources, which may include data used without the authors’ permission. Gemini also trains on user input, and this cannot be disabled by the user, making it less data-safe than some of its competitors.

Features and examples

Vision

The Vision component of Gemini allows it to identify objects and information in photographs and other images, using this to supplement its answers. In the following example we submitted a photograph of a package of lab-grown beef in minced meat form, and asked the model to tell us something about this product.

Prompt: What can you tell me about this product?

Answer: [screenshot of Gemini’s response describing the product]
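
The same multimodal request can be reproduced outside the web app. Below is a minimal sketch, again assuming the google-generativeai Python SDK, an API key in the GOOGLE_API_KEY environment variable, and a hypothetical local photo file.

```python
# Sketch reproducing the Vision example: a text prompt plus a photo.
# Assumes `pip install google-generativeai pillow`; the file name is hypothetical.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

photo = Image.open("lab_grown_beef_package.jpg")  # hypothetical local file
response = model.generate_content(
    ["What can you tell me about this product?", photo]
)
print(response.text)
```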

Check answers with Google search

Continuing with the lab-grown meat example, we asked the model to check its response against the results obtained through a Google search.

The model then searches for information related to the answer it previously gave and annotates the output with both a source and a colour code. The source indicates where the information was found, and the colour code indicates whether the model found corroborating information (green) or conflicting information (orange). It remains the user’s responsibility to validate the sources and the information, but this can be a helpful tool for identifying some of the more questionable statements. Note that the source is not always a scientific paper or otherwise reliable; it is therefore important to always validate both the stated information and the reliability of the source provided.

In the example given, one statement conflicted with the typical results of a Google search and was flagged for user review.
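
The check itself is only exposed in the web app, but the underlying idea can be illustrated. The toy sketch below is entirely hypothetical and not Google’s implementation: it flags a statement as green when a retrieved snippet appears to corroborate it and orange otherwise, using naive keyword overlap as a stand-in for Google’s actual matching.

```python
# Toy illustration of the colour-coding idea: mark statements by whether
# retrieved snippets appear to corroborate them. NOT Google's implementation;
# matching here is naive keyword overlap.

def corroboration_colour(statement: str, snippets: list[str]) -> str:
    """Return 'green' if any snippet shares most keywords with the statement,
    'orange' otherwise (i.e. the statement needs user review)."""
    words = {w.lower().strip(".,") for w in statement.split() if len(w) > 3}
    for snippet in snippets:
        snippet_words = {w.lower().strip(".,") for w in snippet.split()}
        if words and len(words & snippet_words) / len(words) >= 0.6:
            return "green"
    return "orange"

# Hypothetical example data standing in for real search results.
statement = "Lab-grown meat is produced by culturing animal cells in a bioreactor."
snippets = [
    "Cultured meat is produced by culturing animal cells in a bioreactor.",
]
print(corroboration_colour(statement, snippets))  # -> green
```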

Draft answers

The final feature highlighted for this model is the ability to see alternative answers, referred to as ‘drafts’ by Google. These are answers generated from the same input which, due to the statistical nature of the model, differ slightly in content and/or tone.

With factual answers the variation is typically small, but asking for creative answers (such as poems) can yield greater variation.
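
Drafts are a web-app feature and the API does not expose them directly, but similar variation can be reproduced by sampling. A minimal sketch, again assuming the google-generativeai SDK, where the temperature parameter controls how much repeated outputs vary:

```python
# Sketch: reproducing draft-like variation via sampling temperature.
# Higher temperature -> more varied outputs across repeated calls.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

prompt = "Write a two-line poem about lab-grown meat."
for _ in range(3):
    response = model.generate_content(
        prompt,
        generation_config=genai.types.GenerationConfig(temperature=0.9),
    )
    print(response.text, "\n---")
```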