There are two places where configuration can be stored.

First, the tool looks for a global configuration file in $HOME/.cgptcodeveloperglobal/ with properties that will likely be identical no matter where you call the tool from.

Second, you can store shell scripts in the directory .cgptcodeveloper; these serve as actions that ChatGPT can execute - for instance, triggering a build. See below.


In the simplest case, as in the quickstart, where you use the engine within an OpenAI GPT and with an HTTPS tunnel, you will only need the property gptsecret, set to a secret you choose that OpenAI will use to authenticate itself to your engine. Its use is described in the GPT setup. Example:
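For instance, the properties file in $HOME/.cgptcodeveloperglobal/ might then contain nothing more than this (the secret value is a placeholder; choose your own):

```properties
# secret chosen by you; configure the same value in the GPT's authentication settings
gptsecret=replace-with-a-long-random-secret
```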


Mostly obsolete: if you use the engine as a ChatGPT plugin, ChatGPT will give you an OpenAI token during plugin registration; that token should be stored in the property openaitoken.

If you run the engine directly with HTTPS using your own certificate instead of an HTTPS tunnel, there are the properties httpsport for the HTTPS port the engine should use, keystorepath and keystorepassword for the keystore (or keystorepasswordpath with a file containing the password), and domain for the domain at which the engine is reachable on port 443. So the configuration could look like this:
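A sketch of such a configuration; the paths, password, and domain are placeholders:

```properties
httpsport=443
keystorepath=/path/to/keystore.p12
keystorepassword=changeit
# alternatively, read the password from a file:
# keystorepasswordpath=/path/to/password.txt
domain=gpt.example.com
```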


The scripts in .cgptcodeveloper/

Any shell script named *.sh in the directory .cgptcodeveloper/ can be called by name from ChatGPT. As an example you can use the .cgptcodeveloper/ directory in the engine sources. If you ask ChatGPT "Please execute listActions", it will trigger a request that makes the engine look for a script named listActions.sh there, execute it, and deliver its output to ChatGPT. In my example that script searches for other scripts in that directory and prints them, so that ChatGPT knows which actions it can execute. If you use that, put a comment like

# Plugin Action: maven build incl. running unit- and integrationtests

into each script, since any line containing Plugin Action: will be returned as a description to ChatGPT.
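For illustration, here is a minimal sketch of what such a listing script might look like; the real listActions.sh in the engine sources may differ, and the function name list_actions exists only in this sketch:

```shell
#!/bin/sh
# Plugin Action: lists the actions available in this directory

# Print each action script's name together with its "Plugin Action:"
# description line, so ChatGPT can discover what it may execute.
list_actions() {
  for script in "$1"/*.sh; do
    [ -e "$script" ] || continue
    printf '%s: %s\n' "$(basename "$script" .sh)" \
      "$(grep -o 'Plugin Action:.*' "$script" | head -n 1)"
  done
}

list_actions .cgptcodeveloper
```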