Alright, my friends, I’m back with another post based on my learnings and exploration of AI and how it will fit into our work as network engineers. In today’s post, I want to share the first (of what will likely be many) “nerd knobs” that I think we all should be aware of, and how they can affect our use of AI and AI tools. I can already sense the excitement in the room. After all, there’s not much a network engineer likes more than tweaking a nerd knob in the network to fine-tune performance. And that’s exactly what we’ll be doing here: fine-tuning our AI tools to help us be more effective.
First up, the requisite disclaimer or two.
- There are SO MANY nerd knobs in AI. (Shocker, I know.) So, if you all like this kind of blog post, I’d be happy to come back in other posts where we look at other “knobs” and settings in AI and how they work. Well, I’d be happy to come back once I understand them, at least. 🙂
- Changing any of the settings in your AI tools can have dramatic effects on the results. This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth, explore, and experiment. But do so in a safe lab environment.
For today’s experiment, I’m once again using LMStudio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LMStudio, check out my last blog, Making a NetAI Playground for Agentic AI Experimentation.
Enough of the setup, let’s get into it!
The impact of working memory size, a.k.a. “context”
Let me set a scene for you.
You’re in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you’ve been assigned the happy task of getting to the bottom of it. You captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you’ve also been using AI tools to be more productive, 10x your work, impress your boss, you know, all the things that are going on right now.
So, you decide to see if AI can help you work through the data faster and get to the root of the issue.
You fire up your local AI assistant. (Yes, local, because who knows what’s in the debug messages? Best to keep it all safely on your laptop.)
You tell it what you’re up to and paste in the log messages.


After getting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. But before you can take a sip of that iced tea and lemonade goodness, you see that this has suddenly popped up on the screen:


Oh my.
“The AI has nothing to say.”!?! How could that be?
Did you find a question so difficult that AI can’t handle it?
No, that’s not the problem. Check out the helpful error message that LMStudio has kicked back:
“Trying to keep the first 4994 tokens when context overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide shorter input.”
And we’ve gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much “working memory” it has. The technical term for this working memory is “context length.” If you try to send more data to an AI tool than can fit into the context length, you’ll hit this error, or something like it.
The error message indicates that the model was “loaded with context length of only 4096 tokens.” What’s a “token,” you wonder? Answering that could be the topic of an entirely different blog post, but for now, just know that “tokens” are the unit of size for the context length. And the very first thing that happens when you send a prompt to an AI tool is that the prompt is converted into “tokens.”
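If you’re curious roughly how many tokens a chunk of logs will turn into before you paste it in, a quick back-of-the-napkin check is easy to script. This is only a sketch: the four-characters-per-token figure is a rough rule of thumb, the file name is made up, and the real count depends on the tokenizer of the specific model you’re running.

```python
# Rough token estimate for a log capture before sending it to the model.
# Rule of thumb only: actual token counts depend on the model's own tokenizer.
from pathlib import Path

CHARS_PER_TOKEN = 4      # common approximation for English-ish text
CONTEXT_LENGTH = 4096    # the context length the model is currently loaded with

log_text = Path("router-debug.log").read_text()  # hypothetical capture file
estimated_tokens = len(log_text) // CHARS_PER_TOKEN

print(f"~{estimated_tokens} tokens in the log capture")
if estimated_tokens > CONTEXT_LENGTH:
    print("This probably won't fit. Shorten the input or raise the context length.")
```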
So what can we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it’s not a big deal to provide shorter input. But other times, like when we’re dealing with large log files, that option isn’t practical; the whole file is important.
Time to turn the knob!
It’s that first option, loading the model with a larger context length, that’s our nerd knob. Let’s turn it.
From within LMStudio, head over to “My Models” and click to open up the configuration settings interface for the model.


You’ll get a chance to view all the knobs that AI models have. And as I mentioned, there are a lot of them.


But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens. But it supports up to 8192 tokens. Let’s max it out!


LMStudio provides a helpful warning and a likely reason why the model doesn’t default to the max. The context length takes memory and resources, and raising it to “a high value” can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I often use has a max that high), you might not want to just max it out right away. Instead, increase it a little at a time to find the sweet spot: a context length big enough for the job, but not oversized.
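As a loose illustration of that “sweet spot” logic, here is a tiny sizing helper. Everything about it is an assumption for illustration: the headroom for the model’s reply, the step size, and the numbers in the example are arbitrary, and LMStudio doesn’t compute any of this for you.

```python
def suggest_context_length(prompt_tokens: int, model_max: int,
                           response_headroom: int = 1024, step: int = 1024) -> int:
    """Pick a context length large enough for the prompt plus room for the reply,
    rounded up to the next step, without jumping straight to the model's max."""
    needed = prompt_tokens + response_headroom
    rounded = ((needed + step - 1) // step) * step  # round up to a multiple of step
    return min(rounded, model_max)

# Example: ~5,000 tokens of logs against a model that supports up to 40,960
print(suggest_context_length(5000, 40960))  # -> 6144, not 40960
```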
As network engineers, we’re used to fine-tuning knobs for timers, frame sizes, and so many other things. This is right up our alley!
Once you’ve updated your context length, you’ll need to “Eject” and “Reload” the model for the setting to take effect. But once that’s done, it’s time to put the change we’ve made to use!
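If you’d rather drive this from a script than the chat window, LMStudio can also expose a local OpenAI-compatible server. Here’s a minimal sketch under a few assumptions: the server is running on its default port, the model has already been reloaded with the larger context length, the model identifier is a placeholder, and “router-debug.log” stands in for your real capture.

```python
# Send the whole log capture to the locally hosted model in one prompt.
from pathlib import Path

from openai import OpenAI  # pip install openai

# LMStudio's local server speaks the OpenAI API; the API key is ignored locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

logs = Path("router-debug.log").read_text()  # hypothetical capture file

messages = [
    {"role": "system", "content": "You are helping a network engineer troubleshoot an issue."},
    {"role": "user", "content": "Review these logs and summarize what they show:\n\n" + logs},
]

response = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier your loaded model reports
    messages=messages,
)
print(response.choices[0].message.content)
```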


And look at that: with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about what they show.
I particularly like the shade it threw my way: “…consider seeking assistance from … a qualified network engineer.” Well played, AI. Well played.
But bruised ego aside, we can continue the AI-assisted troubleshooting with something like this.
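(And if you’re scripting the conversation rather than using the chat window, a follow-up is just another message appended to the same thread. This sketch extends the earlier one and reuses its client, messages, and response variables.)

```python
# Keep the conversation going: add the assistant's answer and a follow-up question.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Based on those findings, what should I check next on the affected devices?",
})

follow_up = client.chat.completions.create(model="local-model", messages=messages)
print(follow_up.choices[0].message.content)
```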


And we’re off to the races. We’ve been able to leverage our AI assistant to:
- Process a significant amount of log and debug data to identify potential issues
- Develop a timeline of the problem (which will be super useful in the help desk ticket and root cause analysis documents)
- Identify some next steps we can take in our troubleshooting efforts.
All stories must end…
And there you have it, our first AI Nerd Knob: Context Length. Let’s review what we learned:
- AI models have a “working memory” that’s called “context length.”
- Context length is measured in “tokens.”
- Oftentimes an AI model will support a higher context length than the default setting.
- Increasing the context length will require more resources, so make changes slowly; don’t just max it out completely.
Now, depending on what AI tool you’re using, you may NOT be able to adjust the context length. If you’re using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and the models you have access to. However, there most definitely IS a context length that will factor into how much “working memory” the AI tool has. And being aware of that fact, and its impact on how you can use AI, is important. Even when the knob in question is behind lock and key. 🙂
If you enjoyed this look under the hood of AI and would like to learn about more options, please let me know in the comments: Do you have a favorite “knob” you’d like to see explored? Share it with all of us. Until next time!
PS… If you’d like to learn more about using LMStudio, my friend Jason Belk put together a free tutorial called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. Check it out!
Sign up for Cisco U. | Join the Cisco Learning Network today for free.
Learn with Cisco
X | Threads | Facebook | LinkedIn | Instagram | YouTube
Use #CiscoU and #CiscoCert to join the conversation.