1. What is a Prompt?
- A prompt is the set of inputs you provide to guide a large language model (LLM) toward generating a desired response or output.
- Components of a prompt:
- Task/Instruction: What you want the model to do.
- Context: Background or additional information.
- Input Text: The content you want the model to respond to.
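The three components above can be assembled into a single prompt string. A minimal sketch (the task, context, and input text are invented examples, not from any specific model's required format):

```python
def build_prompt(task: str, context: str, input_text: str) -> str:
    """Combine the three prompt components into one string."""
    return (
        f"Task: {task}\n\n"
        f"Context: {context}\n\n"
        f"Input: {input_text}"
    )

prompt = build_prompt(
    task="Summarize the review in one sentence.",
    context="The review is for a waterproof hiking boot.",
    input_text="These boots kept my feet dry through two days of rain.",
)
print(prompt)
```

Keeping the components separate like this makes it easy to vary one (for example, swap in new input text) while holding the others fixed.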
2. Types of Prompting Techniques:
- Few-shot prompting: Provide a few examples to help the LLM perform better and adjust its output.
- Zero-shot prompting: Ask the model to perform a task, such as sentiment classification, without providing any examples.
- Prompt templates: Reusable prompt skeletons that combine instructions, examples, and a placeholder for the specific content or question of your task.
- Chain-of-thought prompting: Break down complex tasks into intermediate steps for better quality and coherence in the output.
- Prompt tuning: Replaces the discrete prompt text with a continuous (soft) embedding that is learned during training for a specific task; this is more parameter-efficient than full fine-tuning.
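The difference between zero-shot and few-shot prompting comes down to whether example input/output pairs are prepended before the query. A sketch using a made-up sentiment-classification task (the instruction wording and labels are illustrative):

```python
def make_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt; an empty examples list yields a zero-shot prompt,
    a non-empty list yields a few-shot prompt."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

instruction = "Classify the sentiment of the text as Positive or Negative."
query = "The battery died in an hour."

zero_shot = make_prompt(instruction, [], query)
few_shot = make_prompt(
    instruction,
    [
        ("I love this phone.", "Positive"),
        ("The screen cracked on day one.", "Negative"),
    ],
    query,
)
```

Both prompts end with a trailing `Sentiment:` cue so the model completes the label; the few-shot version simply shows the desired input/output pattern first.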
3. What is Prompt Engineering?
- The practice of crafting and optimizing input prompts using appropriate words, phrases, and punctuation for better LLM performance.
- The quality of prompts influences the quality of LLM responses.
- The strategy depends on your task and data.
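One common prompt-engineering tactic is to use explicit delimiters so the model can distinguish the instruction from the data it should operate on. A hedged sketch (the tag names and wording are illustrative choices, not a fixed standard):

```python
def engineered_prompt(instruction: str, document: str) -> str:
    """Wrap the input in explicit tags so the model can tell the
    instruction apart from the text it should operate on."""
    return (
        f"{instruction}\n\n"
        f"<text>\n{document}\n</text>"
    )

prompt = engineered_prompt(
    "Summarize the text below in one sentence.",
    "Our quarterly revenue grew, driven by strong dive-gear sales.",
)
```

Delimiters like these also reduce the chance that content inside the document is mistaken for an instruction.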
4. Common Tasks Supported by LLMs on Amazon Bedrock:
- Classification, Question Answering (with or without context), Summarization, Open-ended text generation, Code generation, Math, and Reasoning/Logical thinking.
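A minimal sketch of running one of these tasks (classification) on Amazon Bedrock through boto3's Converse API. The model ID, region, and inference parameters are example values; the actual network call needs AWS credentials, so it is kept in a helper that is not invoked here:

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 256, "temperature": 0.2},
    }

def run_on_bedrock(request: dict) -> str:
    """Send the request to Bedrock (requires AWS credentials; not run here)."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(**request)
    return response["output"]["message"]["content"][0]["text"]

request = build_converse_request(
    "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "Classify the sentiment of: 'The battery died in an hour.'",
)
```

The same request structure works for the other task types; only the prompt text changes.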
5. Model Latent Space:
- Latent space is the encoded knowledge in an LLM that stores patterns, relationships, and language knowledge.
- Example: A model trained on scuba-vacation data (destinations, dive depths, weather) encodes that information in its latent space. When prompted with, say, "I want to snorkel with manatees," it draws on those stored patterns to generate relevant recommendations.