You're an application owner in a meeting with a technical sales team, and the conversation turns to the big questions:
Is it better to tune my application for throughput or response time?
Should I run fewer larger pods or many smaller pods?
Should I break my application apart into more pods?
Will we save money if we optimize our Kubernetes apps?
Isn’t more always faster and better?
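The pod-sizing question, for one, has concrete knobs behind it. As a purely hypothetical sketch (the names and numbers here are invented for illustration, not a recommendation), the same total budget of 4 CPUs and 4 GiB of memory could back two large replicas or eight small ones:

```yaml
# Option A: fewer, larger pods — 2 replicas at 2 CPU / 2Gi each.
# More per-request headroom, but each pod lost hurts more.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels: { app: example-app }
  template:
    metadata:
      labels: { app: example-app }
    spec:
      containers:
        - name: app
          image: example/app:latest   # hypothetical image
          resources:
            requests: { cpu: "2", memory: 2Gi }
            limits:   { cpu: "2", memory: 2Gi }
# Option B: many, smaller pods — same total budget, spread thinner:
#   replicas: 8
#   requests: { cpu: 500m, memory: 512Mi }
# Better bin-packing and blast-radius, less headroom per pod.
```

Which option actually wins depends on the workload's concurrency, memory profile, and latency targets, which is exactly why the questions above are hard to answer in the abstract.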
And the answer comes back: "It depends." A qualified explanation follows, and you're left holding an I.O.U. for answers that may never come.
In this article at TheNewStack, we discuss application tuning in Kubernetes environments.
Thanks to machine learning and the rapid feedback it provides, "it depends" may be a thing of the past.