Adam Bronte

How different LLMs write "Hello World"

Whenever I'm trying out a new LLM, my go-to prompt is always "Write a hello world program in python". It's a trivial request, but the response tells you a lot about how the model behaves.

How many different ways are there really to write hello world?

Take GPT-4o for example:

Certainly! Here's a simple "Hello, World!" program in Python:

print("Hello, World!")

To run this program, you can save it in a file with a .py extension, for example hello_world.py, and then execute it using a Python interpreter by running python hello_world.py in your terminal or command prompt.

Very simple: one line of code with a bit of explanatory text.

Compare this to Gemini 1.5 Pro:

print("Hello, world!")

Even more to the point, with no explanation at all.

Now compare this with Llama 3.3 70B:

Hello World in Python

# hello_world.py

def main():
   """Prints 'Hello, World!' to the console."""
   print("Hello, World!")

if __name__ == "__main__":
   main()

Explanation:

  1. The main function is defined to contain the program's entry point.
  2. The print function is used to output the string "Hello, World!" to the console.
  3. The if __name__ == "__main__": block ensures that the main function is only executed when the script is run directly (not when it's imported as a module).

Running the Program:

  1. Save this code in a file named hello_world.py.
  2. Open a terminal or command prompt and navigate to the directory where you saved the file.
  3. Run the program using Python: python hello_world.py
  4. You should see the output "Hello, World!" in the console.

It's a huge difference from the first two: the code itself is structured completely differently, and there's a lot of extra text around it.
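For all the stylistic variation, the one-liner and Llama's structured version do exactly the same thing. A quick sketch (my own, not from any of the models) runs both snippets and confirms the output is identical:

```python
import io
from contextlib import redirect_stdout

# The one-liner from GPT-4o
ONE_LINER = 'print("Hello, World!")'

# The structured version from Llama 3.3 70B
STRUCTURED = '''
def main():
    """Prints 'Hello, World!' to the console."""
    print("Hello, World!")

if __name__ == "__main__":
    main()
'''


def run(src: str) -> str:
    """Execute a snippet as if it were a top-level script and capture its stdout."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        # Setting __name__ makes the `if __name__ == "__main__":` guard fire.
        exec(src, {"__name__": "__main__"})
    return buf.getvalue()


assert run(ONE_LINER) == run(STRUCTURED) == "Hello, World!\n"
```

(Gemini's version is the one outlier in output, not just style: it prints a lowercase "Hello, world!".)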

This is why it's my go-to prompt for quickly trying out new models: it's a simple request, but as seen above, the responses can go in many different directions.

#AI