Text-to-* Model Prompt Programming Task

From GM-RKB
A [[Text-to-* Model Prompt Programming Task]] is a [[text-to-* prompt writing task]] that is a [[programming task]] that requires the creation of [[AI model text prompt]]s (for a [[text-to-* model]]) to solve a [[prompt-based text-to-* model inference task]].
* <B>AKA:</B> [[LLM Prompt Engineering]], [[Prompt Design]], [[Prompt Engineering Task]], [[AI Prompt Creation]].
* <B>Context:</B>
** [[Task Performance Measure|measure]]: the [[accuracy]] and [[relevance]] of model outputs for a given [[task]] or [[application]].
** It can typically involve [[structured instruction creation]] that guides [[text-to-* model behavior]] toward specific [[text-to-* model output goal]]s.
** It can typically require understanding of both [[natural language nuance]] and [[text-to-* model capability]] to craft effective [[text-to-* model prompt]].
** It can typically benefit from [[iterative refinement process]] where [[text-to-* model prompt]] is repeatedly adjusted based on [[text-to-* model output evaluation]].
** It can typically influence [[text-to-* model response quality]] through careful selection of [[text-to-* model prompt format]], [[text-to-* model prompt structure]], and [[text-to-* model prompt content]].
** It can typically facilitate the creation of [[consistent text-to-* model response]] through [[text-to-* model constraint application]].
** ...
** It can often be the process of crafting and optimizing instructions or queries given to AI models to elicit the desired response or action.
** It can often apply a [[Prompt Engineering Technique]].
** It can often follow a [[Prompt Engineering Process]], such as an iterative trial and error process.
** It can often require [[domain-specific knowledge]] to effectively communicate [[text-to-* model task requirement]].
** It can often involve establishing [[text-to-* model persona]] to guide [[text-to-* model tone]] and [[text-to-* model expertise level]].
** It can often include [[context window management]] to ensure all relevant [[text-to-* model prompt information]] fits within [[text-to-* model token limit]].
** It can often incorporate [[text-to-* model example selection]] to demonstrate desired [[text-to-* model output pattern]].
** It can often necessitate [[error case anticipation]] to prevent common [[text-to-* model misunderstanding]].
** ...
** It can range from being [[Text-to-Text Prompting]], to being [[Text-to-Code Prompting]], to being [[Text-to-Image Prompting]], to being [[Text-to-Multi-Modal Prompting]].
** It can range from being [[Manual Prompt Engineering]] (by a [[prompt engineer]]) to being [[Automated Prompt Engineering]] (by a [[prompt engineering system]]).
** It can range from being [[Zero-Shot Prompt Engineering]] (of [[zero-shot prompt]]s) to being [[Example-Including Prompt Engineering]] (of [[example-including prompt]]s).
** It can range from being a [[Declarative Text-to-* Model Prompt Programming Task]] to being an [[Imperative Text-to-* Model Prompt Programming Task]], depending on its [[text-to-* model instruction style]].
** It can range from being a [[Simple Text-to-* Model Prompt Programming Task]] to being a [[Complex Text-to-* Model Prompt Programming Task]], depending on its [[text-to-* model reasoning requirement]].
** It can range from being a [[Single-Turn Text-to-* Model Prompt Programming Task]] to being a [[Multi-Turn Text-to-* Model Prompt Programming Task]], depending on its [[text-to-* model interaction pattern]].
** It can range from being a [[Domain-General Text-to-* Model Prompt Programming Task]] to being a [[Domain-Specific Text-to-* Model Prompt Programming Task]], depending on its [[text-to-* model specialization level]].
** It can range from being a [[Human-Readable Text-to-* Model Prompt Programming Task]] to being a [[Machine-Optimized Text-to-* Model Prompt Programming Task]], depending on its [[text-to-* model prompt design approach]].
** ...
** It can be covered in [[Prompt Engineering Training Session]].
** It can include crafting prompts with sample input-output examples to clarify model behavior.
** It can enable [[Meta-Prompts]] where prompts create or modify other prompts for scaling tasks.
** It can involve [[text-to-* model system prompt design]] as a foundational layer for [[text-to-* model user prompt]] interaction.
** It can incorporate [[text-to-* model temperature adjustment]] to control [[text-to-* model output randomness]].
** It can utilize [[text-to-* model token economy strategy]] to maximize [[text-to-* model context window usage]].
** It can require [[text-to-* model output format specification]] for structured [[text-to-* model response formatting]].
** It can benefit from [[text-to-* model instruction decomposition]] to enhance [[text-to-* model task comprehension]].
** It can employ [[text-to-* model error handling instruction]] to manage [[text-to-* model edge case behavior]].
** It can integrate [[text-to-* model tool use directive]] to enable [[text-to-* model external resource interaction]].
** It can leverage [[text-to-* model prompt template library]] for consistent [[text-to-* model prompt pattern application]].
** ...
* <B>Example(s):</B>
**** [[Translation Prompt Engineering]] for [[prompt language conversion]].
**** [[Creative Writing Prompt Engineering]] for [[prompt narrative generation]].
**** [[Question-Answering Prompt Engineering]] for [[prompt information extraction]].
**** [[Instructional Prompt Engineering]] for [[prompt procedure explanation]].
*** [[Text-to-Code Prompt Engineering]]s, such as:
**** [[Programming Prompt Engineering]] for [[prompt code creation]].
**** [[Debug Prompt Engineering]] for [[prompt error identification]].
**** [[Technical Documentation Prompt Engineering]] for [[prompt instruction creation]].
**** [[Algorithm Design Prompt Engineering]] for [[prompt solution architecture]].
**** [[Code Refactoring Prompt Engineering]] for [[prompt code optimization]].
*** [[Text-to-Image Prompt Engineering]]s, such as:
**** [[Art Prompt Engineering]] for [[prompt visual creation]].
**** [[Design Prompt Engineering]] for [[prompt style specification]].
**** [[Photorealistic Prompt Engineering]] for [[prompt reality simulation]].
**** [[Concept Art Prompt Engineering]] for [[prompt imaginative visualization]].
**** [[Character Design Prompt Engineering]] for [[prompt persona representation]].
** [[Business Prompt Engineering Task]]s, such as:
*** [[Strategy Development]]s, such as:
**** [[Market Research Prompt Engineering]] for [[prompt insight generation]].
**** [[Competitive Analysis Prompt Engineering]] for [[prompt market assessment]].
**** [[SWOT Analysis Prompt Engineering]] for [[prompt organizational evaluation]].
**** [[Risk Assessment Prompt Engineering]] for [[prompt uncertainty quantification]].
*** [[Content Creation]]s, such as:
**** [[Role-Based Prompt Engineering]] for [[prompt consultant simulation]].
**** [[Memo Creation Prompt Engineering]] for [[prompt executive communication]].
**** [[Product Ideation Prompt Engineering]] for [[prompt innovation development]].
**** [[Marketing Copy Prompt Engineering]] for [[prompt persuasive messaging]].
**** [[Email Template Prompt Engineering]] for [[prompt communication standardization]].
** [[Advanced Prompt Engineering Task]]s, such as:
*** [[Meta Engineering]]s, such as:
**** [[Task Decomposition Prompt Engineering]] for [[prompt subtask creation]].
**** [[Data Analysis Prompt Engineering]] for [[prompt insight extraction]].
**** [[Multi-Agent Prompt Engineering]] for [[prompt collaborative reasoning]].
**** [[Tool-Using Prompt Engineering]] for [[prompt external resource utilization]].
*** [[Optimization Engineering]]s, such as:
**** [[Iterative Prompt Engineering]] for [[prompt response improvement]].
**** [[Context Enhancement Prompt Engineering]] for [[prompt clarity optimization]].
**** [[Output Adaptation Prompt Engineering]] for [[prompt content refinement]].
**** [[Token Economy Prompt Engineering]] for [[prompt efficiency maximization]].
**** [[Error Handling Prompt Engineering]] for [[prompt robustness improvement]].
** [[Training Prompt Engineering Task]]s, such as:
*** [[Educational Engineering]]s, such as:
**** [[Overview Prompt Engineering]] for [[prompt basic instruction]].
**** [[Advanced Prompt Engineering]] for [[prompt optimization technique]].
**** [[Workshop Prompt Engineering]] for [[prompt interactive learning]].
**** [[Tutorial Prompt Engineering]] for [[prompt step-by-step guidance]].
**** [[Assessment Prompt Engineering]] for [[prompt learning evaluation]].
*** [[Performance Engineering]]s, such as:
**** [[Performance Prompt Engineering]] for [[prompt efficiency improvement]].
**** [[Quality Assurance Prompt Engineering]] for [[prompt output verification]].
**** [[Benchmark Prompt Engineering]] for [[prompt comparative assessment]].
**** [[Scalability Prompt Engineering]] for [[prompt processing optimization]].
**** [[Integration Prompt Engineering]] for [[prompt system compatibility]].
** [[Specialized Domain Prompt Engineering Task]]s, such as:
*** [[Medical Prompt Engineering]]s, such as:
**** [[Diagnostic Prompt Engineering]] for [[prompt symptom analysis]].
**** [[Treatment Recommendation Prompt Engineering]] for [[prompt therapeutic suggestion]].
**** [[Medical Literature Review Prompt Engineering]] for [[prompt research synthesis]].
*** [[Legal Prompt Engineering]]s, such as:
**** [[Contract Analysis Prompt Engineering]] for [[prompt agreement evaluation]].
**** [[Legal Research Prompt Engineering]] for [[prompt precedent identification]].
**** [[Case Summarization Prompt Engineering]] for [[prompt litigation distillation]].
*** [[Scientific Prompt Engineering]]s, such as:
**** [[Experiment Design Prompt Engineering]] for [[prompt methodology creation]].
**** [[Data Interpretation Prompt Engineering]] for [[prompt result analysis]].
**** [[Research Question Prompt Engineering]] for [[prompt hypothesis formulation]].
** ...
* <B>Counter-Example(s):</B>
** [[LLM Fine-Tuning Task]], where models are retrained with new data rather than refined through prompt-based methods.
** [[Software Code Programming Task]], where [[software code]] is written without utilizing prompt-based input.
** [[Neural Network Architecture Design]], which involves structuring model components rather than crafting instructions.
** [[Dataset Curation Task]], which focuses on selecting training examples rather than creating inferential prompts.
** [[Hyperparameter Optimization Task]], which tunes model parameters rather than input instructions.
** [[Model Deployment Task]], which configures infrastructure rather than model interaction patterns.
** [[Prompt Tuning Task]], which updates continuous vector representations rather than natural language prompts.
* <B>See:</B> [[Prompt Tuning]], [[Prompt-based Learning]], [[Train of Thought]], [[Few-Shot Learning]], [[Transfer Learning]], [[Chain of Thought]], [[Tree of Thought]], [[Text-to-* AI Model Prompt Development Technique]], [[Prompt Template Library]], [[In-Context Learning]], [[Prompt Engineering Framework]], [[Zero-Shot Prompting]].
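The prompt-construction practices listed under <B>Context:</B> above ([[text-to-* model persona]], [[text-to-* model output format specification]], and [[text-to-* model example selection]]) can be sketched in a few lines. This is a minimal, illustrative sketch: the function and all names below are assumptions for exposition, not part of any specific model API.

```python
# Hypothetical sketch: assembling a structured text-to-* model prompt from
# a persona, a task description, an output-format constraint, and a few
# input-output demonstrations. No model is called; this only builds text.

def build_prompt(role, task, examples, query, output_format):
    """Compose reusable prompt parts into a single prompt string."""
    lines = [
        f"You are {role}.",                      # persona assignment
        f"Task: {task}.",                        # task instruction
        f"Respond with {output_format}.",        # output-format constraint
    ]
    for example_input, example_output in examples:  # few-shot demonstrations
        lines.append(f"Input: {example_input}\nOutput: {example_output}")
    lines.append(f"Input: {query}\nOutput:")     # the actual query to complete
    return "\n\n".join(lines)

prompt = build_prompt(
    role="a sentiment classifier",
    task="label the sentiment of each input",
    examples=[("I loved it", "positive"), ("Terrible service", "negative")],
    query="The food was fine",
    output_format="a single word: positive, negative, or neutral",
)
print(prompt)
```

Because the result is plain text, the same builder can feed any text-to-* model endpoint unchanged; only the parts passed in vary per task.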


----

Latest revision as of 21:11, 23 April 2025


References

2024a

  • (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Prompt_engineering Retrieved:2024-9-19.
    • Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform:[1] a prompt for a text-to-text language model can be a query such as "what is Fermat's little theorem?", a command such as "write a poem about leaves falling", or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style,[2] providing relevant context or assigning a role to the AI such as "Act as a native French speaker". A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog), an approach called few-shot learning. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style,[3] layout, lighting, and aesthetic.
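The few-shot pattern described in the quote can be written out as a literal prompt string: the demonstrations are part of the prompt, and the model is expected to continue the pattern (here, with "dog"). No model is called below, and the arrow separator is just one common convention.

```python
# Few-shot prompt text only: two demonstrations plus an incomplete pair
# that the model is expected to complete by analogy.
few_shot_prompt = (
    "maison -> house\n"
    "chat -> cat\n"
    "chien ->"
)
print(few_shot_prompt)
```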

2024b

  • https://youtu.be/T9aRN5JkmL8
    • NOTE: The interview discusses the evolving practice of prompt engineering, focusing on crafting effective instructions for AI models. Participants emphasize that prompt engineering involves clear communication, iterative experimentation, and a deep understanding of how models process input. They highlight how prompt engineering is akin to teaching, requiring anticipation of edge cases and refinement of tasks. While models are becoming better at understanding prompts, the conversation suggests that future interactions may shift toward collaborative efforts, where AI helps users clarify instructions. Overall, prompt engineering remains essential for extracting top performance, particularly in complex or high-stakes scenarios.

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Prompt_engineering Retrieved:2023-6-17.
    • Prompt engineering is a concept in artificial intelligence, particularly natural language processing. In prompt engineering, the description of the task that the AI is supposed to accomplish is embedded in the input, e.g. as a question, instead of it being explicitly given. Prompt engineering typically works by converting one or more tasks to a prompt-based dataset and training a language model with what has been called "prompt-based learning" or just "prompt learning".

2023a

Overall, prompt engineering is a key component of developing and optimizing text-to-* models, as it can help to improve their accuracy, relevance, and effectiveness for a given task or application.

2023b

  • (ChatGPT-OpenAi, 2023) ⇒ https://chat.openai.com
    • ... Another term that more specifically reflects the AI and NLP-focused nature of prompt engineering is “prompt programming". Prompt programming refers to the process of creating prompts or queries that are used to elicit specific responses from NLP models. The term "programming" emphasizes the technical nature of the task and suggests a more structured approach to designing prompts tailored to the needs of specific NLP models. ...

2023c

  • (Liu et al., 2023) ⇒ Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. (2023). “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing.” In: ACM Computing Surveys, 55(9).
    • QUOTE: ... Now, as of this writing in 2021, we are in the middle of a second sea change, in which the “pre-train, fine-tune” procedure is replaced by one in which we dub “pre-train, prompt, and predict.” In this paradigm, instead of adapting pre-trained LMs to downstream tasks via objective engineering, downstream tasks are reformulated to look more like those solved during the original LM training with the help of a textual prompt. For example, when recognizing the emotion of a social media post, “I missed the bus today,” we may continue with a prompt “I felt so ” and ask the LM to fill the blank with an emotion-bearing word. Or if we choose the prompt “English: I missed the bus today. French: ”), then an LM may be able to fill in the blank with a French translation. In this way, by selecting the appropriate prompts we can manipulate the model behavior so that the pre-trained LM itself can be used to predict the desired output, sometimes even without any additional task-specific training (Table 1(d); e.g., Brown et al. [9], Petroni et al. [100], Radford et al. [105], Schick and Schütze [120]). The advantage of this method is that, given a suite of appropriate prompts, a single LM trained in an entirely unsupervised fashion can be used to solve a great number of tasks [9, 131]. However, as with most conceptually enticing prospects, there is a catch — this method introduces the necessity for prompt engineering, finding the most appropriate prompt to allow a LM to solve the task at hand. ...
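The cloze-style reformulation in the quote can be sketched as simple template filling; `apply_template` is a hypothetical helper written for illustration, not an interface from the survey.

```python
# Illustrative sketch of "pre-train, prompt, and predict": a downstream task
# input is recast as a fill-in-the-blank textual prompt, so a pre-trained LM
# can answer by continuing the text. No model is called here.

def apply_template(template, text):
    """Wrap raw task input in a cloze-style prompt template."""
    return template.format(x=text)

# Emotion recognition recast as continuation of "I felt so ___."
emotion_prompt = apply_template("{x} I felt so ___.", "I missed the bus today.")

# Translation recast as filling in the French side of a bilingual pattern.
translation_prompt = apply_template("English: {x} French: ___", "I missed the bus today.")

print(emotion_prompt)
print(translation_prompt)
```

Swapping the template changes the task while the underlying model stays fixed, which is the manipulation-by-prompt-selection point the quote makes.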

2023d

2022

  • (Zhou et al., 2022) ⇒ Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. (2022). “Learning to Prompt for Vision-language Models.” International Journal of Computer Vision 130, no. 9
    • ABSTRACT: Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming—one needs to spend a significant amount of time on words tuning since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt’s context words with learnable vectors while the entire pre-trained parameters are kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts with a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.
