Introduction: Two machine learning models were developed to reduce the cognitive load on Database Administrators when fine-tuning a PostgreSQL database system. The first model answers simple questions by computing embeddings over a custom-tailored knowledge base consisting of user manuals, YouTube transcripts, and Reddit/Quora posts. The second model uses a Generative Pre-trained Transformer 2 (GPT-2) model with 124M parameters to answer more complex queries that require an understanding of semantic relationships and dependencies among interdependent configuration parameters. The training data used to fine-tune the second model was generated by ChatGPT-4 and the author, leveraging the first model's input-output pairs.
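
The following is a minimal sketch of the first model's embedding-based retrieval step. The text does not name the embedding library used, so sentence-transformers serves here as a stand-in, and the knowledge-base snippets and query are hypothetical illustrations.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in encoder; the paper does not specify which embedding model was used
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical knowledge-base snippets (manuals, transcripts, forum posts)
docs = [
    "shared_buffers sets the amount of memory the server uses for caching.",
    "work_mem limits memory used by internal sort operations and hash tables.",
]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def answer(question: str) -> str:
    """Return the knowledge-base snippet closest to the question."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    # Cosine similarity reduces to a dot product on normalized vectors
    return docs[int(np.argmax(doc_vecs @ q_vec))]

print(answer("How much memory does PostgreSQL use for caching?"))
```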
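For the second model, a sketch of fine-tuning the 124M-parameter GPT-2 checkpoint on question-answer pairs follows, assuming the Hugging Face transformers library. The training pair, the EOS-separated question-answer format, and the learning rate are illustrative assumptions, not the paper's actual setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# The "gpt2" checkpoint is the 124M-parameter base model referenced above
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

# Hypothetical question-answer pair of the kind described in the text
pairs = [("How should shared_buffers relate to work_mem?",
          "shared_buffers is a global cache, while work_mem applies per sort.")]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for question, answer in pairs:
    # Concatenate question and answer into one causal-LM training sequence
    enc = tokenizer(question + tokenizer.eos_token + answer,
                    return_tensors="pt", truncation=True, max_length=512)
    loss = model(**enc, labels=enc["input_ids"]).loss  # causal LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```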