LLM

Assume Competence

Following a recent realization that jargon is fun, I've been experimenting with prompts that instruct LLMs to respond in jargon-rich outlines, assuming the reader is competent. Productivity is up.

.dotfiles commit for linked context: https://github.com/rajp152k/.dotfiles/commit/28dd1385cc4370dd0b15774bb96a661b3cab628f

You respond exclusively in highly concise, jargon-rich org-mode-only outlines, without any bold or italics formatting: the reader is a competent expert with polymathic knowledge and exceptional contextual comprehension. Do not provide explanations unless asked for further simplification; instead, communicate with precision and expect the reader to grasp complex concepts and implicit connections immediately. Do not use any filler sentences, and collaboratively contribute to constructing whatever topic is being expanded upon.
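As a minimal wiring sketch, the prompt can be registered as a named gptel directive; =gptel-directives= is gptel's alist of named system prompts, while the =assume-competence= key and the elided prompt string below are illustrative, not gptel defaults:

#+begin_src emacs-lisp
;; Register the "assume competence" system prompt as a gptel directive.
;; `gptel-directives' is gptel's alist of named system prompts;
;; the `assume-competence' key is my own label, not a gptel default.
(with-eval-after-load 'gptel
  (add-to-list
   'gptel-directives
   '(assume-competence
     . "You respond exclusively in highly concise, jargon-rich org-mode-only outlines ...")))
#+end_src

The directive then shows up alongside gptel's defaults and can be selected per request from gptel's transient menu.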

I wrote an Emacs Package

Fabric[fn:1] is a collection of crowd-sourced prompts, exposed via a CLI tool. I used it for a while some time ago but never fully exploited it because I prefer Emacs.

Eshell buffers are an option, but I am principled in my tool usage and prefer to delegate longer-running CLI tasks to a combination of Alacritty and Tmux.

Restricting my Emacs shell usage to ephemeral popups feels natural.

Gptel[fn:2] is a versatile LLM client that integrates smoothly into my workflow (buffer/text manipulation and management) without disrupting my thought flow.
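The package's core glue reduces to something like the following sketch, assuming fabric patterns are checked out locally as <pattern>/system.md directories; the path and function names here are hypothetical, not the package's actual API:

#+begin_src emacs-lisp
;; Sketch: expose local fabric patterns as gptel directives.
;; Directory layout assumption: <patterns-dir>/<pattern-name>/system.md.
(require 'gptel)

(defvar my/fabric-patterns-dir "~/.config/fabric/patterns"
  "Local checkout of fabric's crowd-sourced patterns (path is an assumption).")

(defun my/fabric-load-patterns ()
  "Register each fabric pattern's system prompt in `gptel-directives'."
  (dolist (dir (directory-files my/fabric-patterns-dir t "^[^.]"))
    (let ((system (expand-file-name "system.md" dir)))
      (when (file-readable-p system)
        (add-to-list
         'gptel-directives
         (cons (intern (file-name-nondirectory dir))
               (with-temp-buffer
                 (insert-file-contents system)
                 (buffer-string))))))))
#+end_src

Each pattern then becomes selectable like any other gptel system prompt, keeping the whole workflow inside ephemeral Emacs buffers.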

Prompt Crafting Distilled

The Premise

I was initially reluctant to use generative AI in my writing process.

That said, I was well aware of the potential of large language models (addressed generically as LLMs henceforth), especially for content creators and eccentrically curious individuals.

I therefore decided to clarify how I'd be using generative AI in my ideation process.

The Promise

Before we get to that, as promised by the title, let me distill the overarching skills needed to extract good insights from a conversation with an LLM (an el-el-em; please don't read it as "large", please).