# Text Generation

Harness the power of large language models to generate any type of text content - from creative writing and code to structured data formats and technical documentation.

## Basic Usage

Generate text responses using our chat completion endpoint:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.neura-ai.app/v1"
)

response = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
```

## Controlling Output Style

### Temperature

Adjust the `temperature` parameter to control creativity vs. consistency:

```python
# More focused and deterministic
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "What is 2+2?"}],
    temperature=0.1
)

# More creative and varied
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Write a creative story opening"}],
    temperature=1.5
)
```

### System Messages

Guide the model's behavior with system instructions:

```python
response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful coding assistant specializing in Python"
        },
        {
            "role": "user",
            "content": "How do I read a CSV file?"
        }
    ]
)
```

## Use Cases

### Code Generation

```python
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{
        "role": "user",
        "content": "Write a Python function to calculate fibonacci numbers with memoization"
    }],
    temperature=0.2
)
```

### Content Summarization

```python
long_article = "..."  # Your long text here

response = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[{
        "role": "user",
        "content": f"Provide a concise summary of this article:\n\n{long_article}"
    }],
    temperature=0.3
)
```

### Structured Output

Generate JSON or other structured formats:

```python
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{
        "role": "user",
        "content": "Generate a JSON object with 5 fake user profiles including name, email, and age"
    }],
    temperature=0.7
)
```

## Streaming Responses

For real-time output, enable streaming:

```python
stream = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Write a poem about the ocean"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

## Best Practices

* Use lower temperatures (0.1-0.3) for factual or deterministic tasks
* Use higher temperatures (0.7-1.2) for creative or varied outputs
* Include clear, specific instructions in your prompts
* Use system messages to set consistent behavior
* Consider token limits when working with large inputs or outputs (see the sketch below)
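As a minimal sketch of the last point, assuming the same `client` as above and that the endpoint accepts the standard `max_tokens` chat-completions parameter, you can cap output length and combine it with the other practices (a system message and a low temperature) for concise, bounded responses:

```python
# A minimal sketch of keeping responses within a token budget.
# Assumes the same `client` as above and that the endpoint supports the
# standard `max_tokens` chat-completions parameter (check your model's limits).
response = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "Summarize the main idea of quantum computing in two sentences."}
    ],
    temperature=0.2,   # low temperature for a factual, consistent answer
    max_tokens=150     # cap the length of the generated output
)

print(response.choices[0].message.content)
print(response.usage)  # token accounting for the request, if the provider returns it
```

Note that `max_tokens` only bounds the generated output; for large inputs you still need to trim or chunk the prompt so it fits within the model's context window.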