OpenAI o1

OpenAI just released a new series of reasoning models for solving hard problems in science, coding, and math. The o1 model is reported to perform similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology, and we also found that it excels at math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%. Its coding ability was evaluated in contests, where it reached the 89th percentile in Codeforces competitions. Source: https://openai.com/index/introducing-openai-o1-preview/

Prompts

1. Implement the LLaMa architecture LLM in Python code using the PyTorch library, then use distillation techniques to distill a large LLaMa model (larger than 70B) into a small student model with a size limit of 2B. Please think step by step and provide details of the model code.

2. Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.

Reviews

  • xiaolei98 2024-09-13 12:18
    Interesting: 5, Helpfulness: 5, Correctness: 5
    Prompt: Implement the LLaMa architecture LLM in Python code using the PyTorch library, then use distillation techniques to distill a large LLaMa model (larger than 70B) into a small student model with a size limit of 2B. Please think step by step and provide details of the model code.

    I asked the OpenAI o1 model to implement the LLaMa architecture LLM in Python code using PyTorch, with a distillation function. The overall response is excellent. It breaks the task down into a few steps:
    1. Set up your environment.
    2. Implement the LLaMa architecture.
    3. Prepare the distillation process.
    As for the code itself, it consists of a few sections: load the large LLaMa model and tokenizer, prepare a smaller student model for distillation, define a custom distillation loss function, create a custom dataset for training, set up a trainer with the distillation loss function, and train the student model using the teacher model. I examined the distillation loss code, which is the KL divergence between the student and teacher output distributions, and the result is correct:
    loss = nn.functional.kl_div(student_probs, teacher_probs, reduction='batchmean')
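
    For readers who want to see what such a distillation loss looks like in full, below is a minimal, self-contained PyTorch sketch. The function name, the temperature, and the alpha blending weight are illustrative assumptions, not o1's exact output; note that torch.nn.functional.kl_div expects its first argument as log-probabilities.

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels,
                              temperature=2.0, alpha=0.5):
            # Illustrative sketch, not o1's actual code. Soften both
            # distributions with a temperature; kl_div expects the first
            # argument in log space and the target as probabilities.
            student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
            teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
            kd_loss = F.kl_div(student_log_probs, teacher_probs,
                               reduction='batchmean') * (temperature ** 2)
            # Standard cross-entropy against the ground-truth next tokens.
            ce_loss = F.cross_entropy(
                student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
            # Blend the soft distillation signal with the hard-label loss.
            return alpha * kd_loss + (1.0 - alpha) * ce_loss

    The temperature scaling (and the temperature-squared factor on the KL term) is the standard knowledge-distillation recipe; the alpha weight trades off matching the teacher against fitting the training labels.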


  • ai4science03 2024-09-13 08:54
    Interesting: 3, Helpfulness: 5, Long Inference Time: 3, Correctness: 5
    Prompt: Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.

    This review looks at OpenAI o1's coding ability on the reasoning-with-LLMs example. On their official website, the prompt for OpenAI o1 asks for a bash script that "takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format." Comparing the results of o1 with GPT-4o, o1's final script is much longer: o1's answer has about 70 lines of bash, while GPT-4o's has only 31 lines. The key difference is in how each model builds the output string. Source: https://openai.com/index/learning-to-reason-with-llms/
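
    To make the task concrete, here is a short Python sketch of the same parse-transpose-print logic (Python rather than bash, for illustration only; this is not what either model produced):

        # Parse '[1,2],[3,4],[5,6]', transpose it, and print '[1,3,5],[2,4,6]'.
        def transpose_matrix(s: str) -> str:
            # Split on '],[' to get the rows, then strip the outer brackets.
            rows = [row.strip('[]').split(',') for row in s.split('],[')]
            # zip(*rows) swaps rows and columns.
            return ','.join('[' + ','.join(col) + ']' for col in zip(*rows))

        print(transpose_matrix('[1,2],[3,4],[5,6]'))  # prints [1,3,5],[2,4,6]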
