Programming in GPT for Dummies. Part 2
Want to learn how to "program" a GPT model without writing code? In this article, we will look at practical techniques for interacting with language models: we will analyze the logic of breaking work into stages, learn how to use conditions and loops, apply JSON structures to track progress, and save intermediate results. You will see how well-thought-out prompts can turn simple communication with the model into a clearly controlled process that scales and adapts to your tasks.
In the previous part, we looked at the basics of working with ChatGPT to solve programming-related tasks without directly writing code. We learned how to properly form prompts, break work into stages, log actions, and get more predictable results. In this part, we will move on to more advanced techniques for working with ChatGPT, including:
- Performing file and directory structure operations
- Creating loops and conditions to control the process
- Creating and using JSON arrays with a complex structure
- Saving intermediate results and using them in subsequent steps
- Multi-level data processing
These methods will allow you to solve much more complex tasks and adapt ChatGPT to your needs, turning it into a universal tool for automation and data processing.
Preliminary remark
How this material reads will depend strongly on your programming experience. If you are only superficially familiar with the principles of writing code and structuring data, many of the proposed methods may seem too detailed and cumbersome. If you are an experienced developer, some techniques may seem overly simple or obvious.
Consider this article as an invitation to experiment. If you are new to structuring task logic, you can use individual elements and gradually implement them into your practice. If you have significant experience, you may want to adapt these approaches to more complex scenarios. In any case, the goal of this text is to expand your capabilities and inspire you to create more efficient solutions.
Step-by-step task processing logic
One of the key advantages of a well-thought-out prompt is the ability to model not just a static set of instructions, but a whole dynamic process, similar to programming. You can think of it this way: instead of giving ChatGPT a single task, you describe a complex logic, break it down into stages, and set conditions for transitioning from step to step. As a result, ChatGPT becomes a kind of executor of your textual "programs".
Stages of work execution
The stages of work execution are large logical blocks of tasks that need to be performed sequentially. Each stage can include many individual steps, as well as nested structures such as loops and conditions if they are necessary for more detailed and step-by-step data processing.
Breaking down a complex task into stages allows you to control the course of the work, track progress, and, if necessary, return to a specific part of the process for refinement or rechecking. In addition, a step-by-step approach makes the task easier for the model to understand, giving it a clear sequence of actions to follow.
Principles of stage and step design
When working with prompts, it is convenient to use a clear hierarchy that helps ChatGPT navigate the task:
1. Highlight stages as numbered headings. For example:
## Stages of work execution
### 1. Unpacking the archive
...
### 2. Deleting unnecessary files and folders
...
Thus, each stage is easy to find in the text, and the model clearly understands the beginning and end of a separate logical block.
2. Describe the steps of the work within the stage. If there are few steps, you can use a regular unnumbered list:
- Extract the contents of the archive into the working directory.
- Check the structure of the unpacked files.
3. If there are many steps or it is necessary to describe their nesting (for example, loops or conditional transitions), use multi-level numbered lists. This will create a clear hierarchy and logic:
1. Data preparation:
1.1 Study the structure of the root folder.
1.2 Check for hidden or system files.
1.3 Apply filters to exclude unnecessary directories.
2. Processing each file:
2.1 Open the file.
2.2 Determine the file type (PHP, JS, CSS).
2.3 Analyze the content and extract the necessary data.
This approach helps not only the model but also the user to easily navigate the given instructions, clearly understand the sequence of actions, and make changes without losing logic.
Example of stage structure
Suppose you want to prepare documentation for a module from the archive. Below is a possible example of organizing the work:
## Stages of work
### 1. Unpacking the archive
- Extract the contents of the archive into the working directory.
- Ensure that the folder structure is preserved.
- Check the availability of unpacked files.
### 2. Deleting unnecessary files and folders
1. Determine the deletion criteria (files with the prefix `_`, `lang` directories, `.DS_Store` files).
2. Sequentially traverse the file structure:
2.1 Find all matching objects.
2.2 Delete each of them.
2.3 In case of errors during deletion, write information to the log.
In this example, the principle of formatting stages and steps is quite simple: large tasks become separate numbered headings, and the detailed instructions under them are given as lists. If the task becomes more complex in the future (for example, it requires repeated processing of individual folders and files under different conditions), you can go even deeper by adding multi-level numbered lists and condition descriptions in the same style.
Thus, a clearly structured prompt with a division of work into stages and steps helps the model confidently move towards the final result, minimizing errors and simplifying the introduction of adjustments at any stage of work.
File operations
After you have identified the main stages and steps of the work, it is worth moving on to a more practical application of these principles. If your task is to work with an archive and its contents (files and folders), it is important to clearly indicate to the model what exactly to do with the unpacked data. A well-formulated prompt will allow ChatGPT to navigate the file structure, delete unnecessary objects, analyze the content, and prepare the basis for subsequent steps.
Approximate task setting
Imagine that you want to create detailed documentation for the code of a module that is packed in an archive. Your goal:
- Upload and unpack the archive.
- Filter out unnecessary files and folders.
- Save the resulting structure to then analyze its content.
To achieve this, you will need to formulate a prompt that clearly describes the actions at each stage.
Clear instructions for ChatGPT
When working with files and directories, you should adhere to the following principles:
- Specify specific filters for unnecessary objects: define the criteria by which the model will filter out files and folders. For example:
  - Delete all files whose names start with `_`.
  - Ignore system files such as `.DS_Store`.
  - Delete directories named `lang` along with all their contents.
- Structure the process step by step: break the work into stages and sub-stages. For example:
  - Stage 1: Unpacking the archive.
  - Stage 2: Cleaning the structure from unnecessary objects.
  - Stage 3: Saving the final structure for further analysis.
- Log operations and check data availability: for each action, ask the model to keep a log of operations and to check whether the file or folder is available. If something is not found, the model should request a re-upload or report the problem.
Example prompt formatting
Below is an example of how you can describe instructions for working with files and directories in your prompt. The formatting uses the principles outlined earlier:
## Stages of Work
### 1. Unpacking the Archive
- Extract the contents of the provided archive into the working directory.
- Ensure that the folder structure is preserved without changes.
- Check the availability of all unpacked files:
- If the files are not available, display the message: "Unpacked files not found. Please upload the archive again to continue working."
### 2. Cleaning the Structure from Unnecessary Files and Folders
1. Define deletion criteria:
1.1 Delete all files whose names start with `_`.
1.2 Delete all system files like `.DS_Store`.
1.3 Delete all `lang` directories along with their contents.
2. Sequentially traverse the file structure:
2.1 Find all objects that meet the deletion criteria.
2.2 Delete them, logging information about the deleted files or folders.
3. After completing the cleanup, check the final structure:
- Ensure that files not subject to deletion criteria remain in place.
- Inform the user about the completion of the cleanup.
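If it is important that the operations log has a predictable form, that format can also be fixed in the prompt. The file names and entry format below are purely illustrative, just one of many possible variants:

Log format (one entry per operation):
[deleted] admin/_temp.php (file name starts with `_`)
[deleted] lang/ (directory removed together with its contents)
[error] .DS_Store (deletion failed, the problem will be reported to the user)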
Conditions
Using conditions allows your prompt to have dynamic behavior. Instead of simply performing the same set of steps for all files or data, you can make the model respond differently depending on the situation.
Why is this important?
Conditions help avoid unnecessary work, skip irrelevant objects, and apply special rules for special cases. For example, if you are processing files of different types, you can set a separate processing scenario for each type.
Example of Conditional Branching
Suppose your archive contains files with the extensions `.php`, `.js`, and `.css`. You want to:
- Condition for `.php`: extract constants, classes, and functions, and describe their purpose.
- Condition for `.js`: identify key functions and their role in the application logic.
- Condition for `.css`: describe the main styles and their impact on the interface appearance.
- If the file does not fit any of the criteria: skip processing this file.
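One possible way to write this branching directly in the prompt, in the same style as the earlier examples (the wording is only a sketch and should be adapted to your file types and goals):

### Conditions for processing files
- If the file has the extension `.php`: extract constants, classes, and functions, and describe their purpose.
- If the file has the extension `.js`: identify the key functions and their role in the application logic.
- If the file has the extension `.css`: describe the main styles and their impact on the interface appearance.
- If the file is empty or cannot be read: write a message about this to the log and do not analyze it.
- In all other cases: skip the file without analysis and mark it as processed.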
Thus, the model will look at each file, determine its type, and choose the appropriate course of action. This simplifies the logic and avoids wasting time processing irrelevant data.
Flexibility and Predictability
Clearly defined conditions make your prompt more flexible. You can add new conditions as needed, for example, when new file types or complex situations (such as missing attributes or content in the file) arise. Additionally, conditions make the model's behavior more predictable and understandable.
Loops
Once you have learned to set conditions, the next step is to organize loops. Loops allow you to repeatedly perform a certain set of actions until all conditions are met or all objects are processed. This is especially useful when working with a large number of files or data.
Why are loops needed?
Loops save you time and effort: you do not have to write out the same logic multiple times. Imagine you have hundreds of files: instead of describing the processing for each one separately, you instruct the model to "go through" all objects, applying the same rules to them.
Example of a loop
Suppose you have already defined conditions for different file types (as in the previous section). Now you need to repeat this processing for each file in the list. An example of a loop description might look like this:
File Processing Cycle:
Repeat the following steps for each unprocessed file:
1. Determine the file type (php, js, css, or other).
2. Apply the corresponding conditions (described above) to analyze the content.
3. Add the analysis results to the documentation or intermediate data structure.
4. Mark the file as processed (e.g., set the flag processed: true in progress.json).
Repeat until all files are processed.
Thus, the model will sequentially take files one by one, perform the specified actions on them, and move on to the next one until there are no more objects to process or other specified conditions are met (e.g., time or step limit).
Logic Extension
Loops can be complicated. For example, you can create nested loops to process nested directories or stop the loop when certain circumstances arise (errors, lack of necessary data, etc.). This makes your prompt even more flexible and similar to a full-fledged program.
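As an illustration, a nested loop with stop conditions might be described in the prompt roughly like this (the step limit and wording here are assumptions, not a required format):

Directory processing cycle:
Repeat the following steps for each unprocessed folder:
1. Read the list of files in the folder.
2. For each file in the folder, perform the file processing cycle described above.
3. After all files in the folder have been processed, mark the folder itself as processed.
Stop conditions:
- If a required file is missing or cannot be read, write an entry to the log and ask the user how to proceed.
- If more than 20 files have been processed in the current session, pause and report the current progress.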
Using JSON Arrays to Track Progress
When your task becomes large enough and many files and data need to be processed, there is a need for a clear system to account for what has already been done and what still needs to be done. JSON arrays are ideal for this purpose. They serve as a "map" of your project, storing up-to-date information about the data structure, processing status, and necessary subsequent steps.
Why JSON?
JSON is a simple, understandable, and universal format, easily interpreted by both humans and programs. It is widely used in web development, and ChatGPT understands this format well. By using JSON, you get a convenient tool for storing and transferring data between work stages. This is especially important when your process is divided into many steps and cycles, and data needs to be reused without reprocessing.
Describing the Structure with Additional Prompts
You can predefine the JSON structure that the model will use to track progress. For example, when working with an archive and analyzing code in modules, you can create a `progress.json` file with the following fields:
{
  "name": "string",
  "absolute_path": "string",
  "relative_path": "string",
  "type": "folder or file",
  "processed": false or true,
  "children": [ ... ]
}
In this case:
- `name`: name of the folder or file
- `absolute_path`: absolute path to the object
- `relative_path`: path relative to the project root
- `type`: type of object (folder or file)
- `processed`: flag indicating whether this object has been processed
- `children`: array of nested objects (for folders)
Using an additional prompt, you can ask the model to pre-generate `progress.json` after cleaning the structure of unnecessary files and before starting the main processing. This way you get a fixed "reference point" to which you can return. The model will understand that this JSON defines the complete list of objects and their status.
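Such an additional prompt can be quite short. The wording below is only one possible variant:

After completing the cleanup stage, create a `progress.json` file in the working directory. Include in it every remaining folder and file, using the structure described above: for each object fill in `name`, `absolute_path`, `relative_path`, and `type`, set `processed` to `false`, and list nested objects in `children`. Show me the resulting `progress.json` and do not start processing files until I confirm it.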
Using JSON structure in loops
Once `progress.json` is generated, you can use its data for cyclic processing:
Cycle of processing objects from progress.json:
1. Read progress.json.
2. Find the first object (folder or file) where processed == false.
3. If it is a folder, process its contents (go to children); if it is a file, apply the conditions described earlier for file types.
4. After processing the object, set processed = true.
5. Repeat until all objects are processed, or until a stop condition is met.
Thus, the JSON structure turns into a kind of "to-do list" that the model works through automatically, marking completed items. This makes it easy to scale the task, add new files, or change the logic at any stage without losing context.
Additional features
JSON can be used not only to record the processing status. You can store additional information in separate fields — for example, file analysis results, extracted constants or functions, as well as links to the generated documentation. This will allow you to have a complete picture of the project by the end of all stages, without losing any details.
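For example, an entry for an already processed file might be extended with fields for the analysis results and a link to the corresponding documentation section (the field names and values here are purely illustrative, not a fixed scheme):

{
  "name": "on_work.php",
  "type": "file",
  "processed": true,
  "constants": ["MODULE_ID"],
  "functions": ["on_work"],
  "doc_section": "doc_module.md#admin-on_workphp"
}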
Flexibility and adaptability
If you later want to change the JSON format or add new fields, you can simply update the instruction in the prompt. The model will understand that now, when forming or updating `progress.json`, it needs to take into account the new requirements. This makes your process not only predictable but also easily adjustable to new needs.
Saving intermediate results
In the process of performing complex tasks, divided into many stages and cycles, it is extremely important to be able to save and reuse partially processed data. Saving intermediate results is the key to the efficiency and predictability of the process. Instead of reprocessing the entire data set with each new run, you can record the progress made and continue from where you left off.
Why is this necessary?
Imagine that you have already analyzed half of the archive files, extracted the necessary data, recorded them in a structure or document, and then you need to take a break. If you save the current state, then at the next start you will not have to repeat all the same actions. The model will be able to immediately proceed to processing the remaining files, based on the already obtained results.
How to save intermediate results?
One convenient way is to use a file, for example, the same `progress.json` that you have already created to track the status of objects. Expand it by adding new fields, or create a separate document in which you record the results of each stage. These can be:
- Lists of already processed files and their properties.
- Extracted data (constants, functions, their descriptions), which will then be used to form the final documentation.
- The status of each step, so that when the model is restarted, it understands which stage is completed and which needs to be continued.
Example:
Suppose you are parsing code from an archive, extracting constants and functions for the final documentation. After processing several files, you create or update `progress.json`:
{
  "name": "root",
  "type": "folder",
  "processed": false,
  "children": [
    {
      "name": "admin",
      "type": "folder",
      "processed": true,
      "functions_extracted": ["on_work", "iblock.section.edit"],
      "children": []
    },
    ...
  ]
}
Now, when the model is restarted, it knows that the `admin` folder has already been processed and can move on to the next objects. The extracted functions are already saved, so you will not have to re-analyze this directory when forming the final documentation.
Adaptation for any tasks
You can save not only the status and extracted data, but also the settings or filters that were applied to the files. This will simplify the expansion or deepening of the analysis during subsequent runs. Saving intermediate results makes the process flexible: you can return to any stage if necessary, make changes and continue without losing the progress achieved.
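For instance, the same `progress.json` can carry a small settings block recording which filters were applied during cleanup; the exact field names below are an assumption and can be whatever is convenient for you:

{
  "settings": {
    "exclude_file_prefixes": ["_"],
    "exclude_directories": ["lang"],
    "ignore_system_files": [".DS_Store"]
  }
}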
Data accumulation
In addition to saving intermediate results in internal formats such as JSON and keeping a log, you can accumulate all the extracted information in a readable form: a Markdown file. This document becomes a summary of all the data and information that the model has collected as it goes through the stages, processes files, and applies conditions.
Why is this necessary?
Markdown is a simple and flexible format that is easy for humans to read and is also suitable for automatic processing. As the steps are completed and constants, functions, classes, and other elements are discovered, the model can add their descriptions, file paths, and comments to the Markdown document. As a result, you get not just internal structures for process control, but also a neat, ready-to-use project description.
Usage example:
Imagine that after each processing stage, the model updates `doc_{archive_name}.md`, adding new sections and subsections. For example:
# Module documentation {archive_name}
## File and folder structure
(here the model will insert the directory tree and links)
## File descriptions
### admin/
- on_work.php: Script for processing certain operations ...
- iblock.section.edit.php: Code for editing sections ...
## Constants and functions
- FUNCTION_NAME($param1, $param2): Description of the function's purpose ...
- DEFINE('CONST_NAME', 'value'): Constant for specifying ...
Thus, with each run or at each stage, the model will return to this Markdown file and supplement it, gradually forming a complete, understandable, and reusable documentation. Ultimately, you will get a file that can be used as a final report or module guide without additional transformations.
Expansion possibilities:
You can add links to source files, code snippets in Markdown format, summary tables, and much more. If some stages require embedding the results of class analysis or parsing complex configuration files, all this can be reflected in the final document. Markdown here acts as a bridge between the technical data processing process and the readable final result.
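For example, a summary table of files might be appended to the same document; the columns and entries below are just one possible layout, based on the earlier examples:

## Summary table of files
| File | Purpose | Key functions |
| --- | --- | --- |
| admin/on_work.php | Script for processing certain operations | on_work |
| admin/iblock.section.edit.php | Code for editing sections | iblock.section.edit |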
Multilevel data processing
All the previous techniques — breaking tasks into stages, using conditions and loops, saving intermediate results, fixing the structure in JSON, and accumulating documentation in Markdown — pave the way for multilevel data processing. The idea is that you can use data collected at early stages in later steps to form generalized conclusions or comprehensive documentation.
How does it work in practice?
Imagine that in the initial stages you extract metadata about each file, determine its type, and identify constants, functions, classes, and their relationships. All this data is recorded in intermediate structures such as `progress.json`, and then supplemented in the final Markdown file. Later, when you already have detailed information about each individual file or section of code, you can move on to the next level of abstraction:
- Analysis of the overall module architecture: instead of looking at files individually, you can consider their interactions. For example, if several files are connected to each other or jointly form a certain piece of functionality, you can use the previously collected data to draw conclusions about key integration points or the logic of the entire module.
- Generation of a summary description: based on the collected descriptions of individual files, you can form a more concise overview covering the general purpose of the module, the main tasks it solves, key classes and their roles, as well as the most significant functions. Such a summary becomes a valuable end product suitable for quick orientation in the project.
- Multi-pass processing: you can go through the data structure repeatedly, adding a new level of information on each pass: first the raw file structure, then a detailed analysis of the code, then the analysis of relationships, and finally the formation of the final report. A possible prompt formulation of such a final stage is sketched after this list.
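In prompt terms, that final level can be set up as a separate stage that relies only on the data already accumulated; the wording below is a rough sketch:

### Final stage. Forming the module summary
1. Read the accumulated documentation file and `progress.json`.
2. Based on the collected file descriptions, describe the overall purpose of the module and the main tasks it solves.
3. List the key classes and functions and their roles, without re-analyzing the source files.
4. Note which files interact with each other and describe these integration points.
5. Add the resulting summary as a separate section at the beginning of the documentation file.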
Advantages of multi-level processing:
Thanks to the multi-level approach, you do not need to "reinvent the wheel" each time. Information once collected and recorded becomes a "building material" for more advanced forms of analysis. This reduces the overall processing time and improves the quality of the results, as each subsequent stage relies on already verified and refined data.
Flexibility and adaptability:
Multilevel processing allows you to return to earlier stages if necessary, make adjustments or supplement data. You can improve the accuracy of conclusions, adjust the structure, or add new types of metadata, and then reuse them at later stages. Thus, your process becomes not only predictable but also evolving — over time, you adapt it to your needs, increasing the efficiency and convenience of analysis.
As a result, multilevel processing is a way to consistently increase the volume and quality of knowledge about your project. Starting with small details of individual files, you step by step come to a holistic view of the module, its purpose, architecture, and capabilities, which makes interaction with the model more meaningful and productive.
Conclusion
We have considered a wide range of techniques and approaches that allow more effective interaction with the model: from breaking tasks into stages to applying conditions and loops, from using JSON to track progress to saving intermediate results and forming final documentation in Markdown. These methods turn communication with the model into a manageable, predictable, and easily scalable process where you can flexibly adapt the logic to your goals.
The main idea of the article is to show that even without direct coding, you can actually "program" the model by giving it clear textual instructions. With the help of well-thought-out prompts, you learn to use the model's capabilities to solve complex tasks: from automatic documentation generation to code structure analysis, from reprocessing large amounts of data to creating multilevel systems that provide a comprehensive overview of the project.
If you are interested in seeing specific examples of prompts, you can find them in the repository for this article. It collects various scenarios that you can use as a basis, adapt, or expand according to your needs. Let this article be a starting point for experiments and an inspiration for creating your own solutions that make full use of the potential of language models.