<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Devops on Adur</title><link>https://adurrr.github.io/en/tags/devops/</link><description>Recent content in Devops on Adur</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sun, 15 Jun 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://adurrr.github.io/en/tags/devops/index.xml" rel="self" type="application/rss+xml"/><item><title>LLMOps: integrating LLMs into DevOps workflows</title><link>https://adurrr.github.io/en/p/llmops-integrating-llms-into-devops-workflows/</link><pubDate>Sun, 15 Jun 2025 00:00:00 +0000</pubDate><guid>https://adurrr.github.io/en/p/llmops-integrating-llms-into-devops-workflows/</guid><description>&lt;p&gt;LLMs have moved beyond chatbots. They&amp;rsquo;re now embedded in engineering workflows where they automate tedious tasks, speed incident response, and boost developer productivity. But deploying an LLM into a production DevOps pipeline is fundamentally different from using ChatGPT in a browser.&lt;/p&gt;
&lt;p&gt;This guide covers what LLMOps means in practice, where LLMs fit into DevOps, architecture patterns that work, and pitfalls to avoid.&lt;/p&gt;
&lt;h2 id="what-is-llmops"&gt;What is LLMOps?
&lt;/h2&gt;&lt;p&gt;LLMOps is the set of practices, tools, and infrastructure needed to operationalize LLMs. It extends MLOps but addresses challenges unique to language models:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Model selection vs. model training&lt;/strong&gt;: Most teams consume pre-trained models (via APIs or self-hosted inference) rather than training from scratch. The operational focus shifts to prompt engineering, fine-tuning, and retrieval-augmented generation (RAG).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost management&lt;/strong&gt;: LLM inference is expensive. Token-based pricing means costs scale with usage in ways that are harder to predict than traditional compute.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Non-determinism&lt;/strong&gt;: LLMs produce variable outputs for the same input, which complicates testing, validation, and reproducibility.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Latency&lt;/strong&gt;: Response times of seconds (not milliseconds) require different architectural patterns than traditional microservices.&lt;/li&gt;
&lt;/ul&gt;
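&lt;p&gt;To make the cost point concrete, here is a rough back-of-the-envelope model. The per-token prices below are placeholders, not any vendor&amp;rsquo;s actual rates:&lt;/p&gt;

```python
# Rough monthly-cost model for token-based pricing.
# The per-1K-token rates are assumptions; check your provider's price sheet.
PRICE_PER_1K_INPUT = 0.005   # USD (assumed)
PRICE_PER_1K_OUTPUT = 0.015  # USD (assumed)

def estimate_monthly_cost(input_tokens: int, output_tokens: int,
                          calls_per_day: int) -> float:
    """Estimated cost in USD over a 30-day month."""
    per_call = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return per_call * calls_per_day * 30

# e.g. a review job: ~8K input tokens, ~1K output tokens, 50 calls/day
print(f"${estimate_monthly_cost(8000, 1000, 50):.2f}/month")
```

&lt;p&gt;Even modest per-call costs compound quickly at pipeline scale, which is why cost tracking shows up in every architecture pattern that follows.&lt;/p&gt;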
&lt;p&gt;LLMOps is not a separate discipline. It is an extension of your existing DevOps and MLOps practices, adapted for the specific operational characteristics of language models.&lt;/p&gt;
&lt;h2 id="practical-use-cases-in-devops"&gt;Practical use cases in DevOps
&lt;/h2&gt;&lt;p&gt;Here is where LLMs are delivering real value in DevOps workflows today:&lt;/p&gt;
&lt;h3 id="automated-code-review"&gt;Automated code review
&lt;/h3&gt;&lt;p&gt;LLMs can provide a first-pass review of pull requests, catching common issues like missing error handling, security anti-patterns, inconsistent naming, or missing tests. They do not replace human reviewers but reduce the burden of repetitive feedback.&lt;/p&gt;
&lt;h3 id="incident-summarization"&gt;Incident summarization
&lt;/h3&gt;&lt;p&gt;When an incident fires at 3 AM, the on-call engineer needs context fast. An LLM can ingest alert data, recent deployment logs, related runbooks, and previous incident reports to produce a concise summary of what is likely going wrong and what was done last time.&lt;/p&gt;
&lt;h3 id="log-analysis"&gt;Log analysis
&lt;/h3&gt;&lt;p&gt;LLMs are surprisingly effective at pattern recognition in unstructured log data. Feed them a block of error logs and they can identify the root cause faster than manual grep sessions, especially for unfamiliar systems.&lt;/p&gt;
&lt;h3 id="documentation-generation"&gt;Documentation generation
&lt;/h3&gt;&lt;p&gt;Generating draft documentation from code, API schemas, or Terraform modules. The output needs human review, but it eliminates the blank-page problem and keeps docs closer to current state.&lt;/p&gt;
&lt;h3 id="infrastructure-as-code-generation"&gt;Infrastructure as Code generation
&lt;/h3&gt;&lt;p&gt;Given a natural language description of desired infrastructure, LLMs can generate Terraform, Ansible, or Kubernetes manifests as a starting point. Useful for scaffolding; the output is not production-ready without human review.&lt;/p&gt;
&lt;h2 id="architecture-patterns-for-llm-integration"&gt;Architecture patterns for LLM integration
&lt;/h2&gt;&lt;h3 id="pattern-1-api-gateway-to-external-llm"&gt;Pattern 1: API gateway to external LLM
&lt;/h3&gt;&lt;p&gt;The simplest approach. Your application calls an external LLM API (OpenAI, Anthropic, etc.) through a centralized gateway that handles authentication, rate limiting, logging, and cost tracking.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[CI/CD Pipeline] --&amp;gt; [API Gateway] --&amp;gt; [External LLM API]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [Logging &amp;amp; Metrics]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [Cost Tracking]
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;: No infrastructure to manage, access to the most capable models, fast to implement.
&lt;strong&gt;Cons&lt;/strong&gt;: Data leaves your network, vendor lock-in, variable latency, ongoing API costs.&lt;/p&gt;
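&lt;p&gt;A minimal sketch of the gateway itself, with the provider call injected so the cross-cutting concerns (rate limiting, logging, cost tracking) are visible. The &lt;code&gt;provider&lt;/code&gt; callable is an assumption of this sketch, standing in for a real vendor SDK call:&lt;/p&gt;

```python
import logging
import time

class LLMGateway:
    """Sketch of a centralized gateway: rate limiting, logging, and cost
    tracking wrapped around a provider call. `provider` is any callable
    taking a prompt and returning (text, tokens_used)."""

    def __init__(self, provider, max_requests_per_minute=60):
        self.provider = provider
        self.max_rpm = max_requests_per_minute
        self.request_times = []     # timestamps of recent requests
        self.total_tokens = 0       # running usage for cost tracking
        self.log = logging.getLogger("llm-gateway")

    def complete(self, prompt: str) -> str:
        now = time.monotonic()
        # Sliding-window rate limit: keep only requests from the last 60s.
        self.request_times = [t for t in self.request_times if now - t < 60]
        if len(self.request_times) >= self.max_rpm:
            raise RuntimeError("rate limit exceeded")
        self.request_times.append(now)

        text, tokens_used = self.provider(prompt)
        self.total_tokens += tokens_used
        self.log.info("prompt=%d chars, tokens=%d", len(prompt), tokens_used)
        return text

# Usage with a stub provider; a real one would call the vendor API.
gateway = LLMGateway(lambda p: (f"echo: {p}", len(p.split())))
print(gateway.complete("summarize these logs"))
```

&lt;p&gt;Routing every caller through one gateway also gives you a single place to swap providers later, which softens the vendor lock-in listed above.&lt;/p&gt;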
&lt;h3 id="pattern-2-self-hosted-inference"&gt;Pattern 2: Self-hosted inference
&lt;/h3&gt;&lt;p&gt;Run open-weight models (Llama, Mistral, etc.) on your own infrastructure using inference servers like vLLM or Ollama.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[CI/CD Pipeline] --&amp;gt; [Load Balancer] --&amp;gt; [vLLM / Ollama Instance(s)]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [GPU Node Pool]
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;: Data stays internal, predictable costs at scale, no vendor dependency, full control over model versions.
&lt;strong&gt;Cons&lt;/strong&gt;: Requires GPU infrastructure, operational overhead, smaller models may be less capable.&lt;/p&gt;
&lt;h3 id="pattern-3-rag-enhanced-pipeline"&gt;Pattern 3: RAG-enhanced pipeline
&lt;/h3&gt;&lt;p&gt;Combine an LLM with a retrieval system that provides relevant context from your own knowledge base (runbooks, documentation, past incidents). This dramatically improves response quality for domain-specific tasks.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Query] --&amp;gt; [Embedding Model] --&amp;gt; [Vector DB Search] --&amp;gt; [Context + Query] --&amp;gt; [LLM] --&amp;gt; [Response]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; |
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [Your Knowledge Base]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; (runbooks, docs, etc.)
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This pattern is particularly powerful for incident response and documentation tasks where the LLM needs your organization&amp;rsquo;s specific context.&lt;/p&gt;
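&lt;p&gt;The retrieval step can be illustrated without any infrastructure. This toy version ranks knowledge-base snippets by bag-of-words cosine similarity; a real pipeline would use an embedding model and a vector database, but the flow is the same:&lt;/p&gt;

```python
import math
from collections import Counter

# Toy retrieval step of a RAG pipeline. The runbook snippets are invented
# examples; production retrieval would use real embeddings and a vector DB.
KNOWLEDGE_BASE = [
    "Runbook: if the payment service returns 502, restart the pod and check DB connections.",
    "Runbook: high CPU on ingestion nodes usually means a stuck consumer; rebalance the group.",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    qv = vectorize(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: cosine(qv, vectorize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Ground the LLM in retrieved context instead of asking it cold.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("payment service returning 502 errors"))
```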
&lt;h2 id="key-considerations"&gt;Key considerations
&lt;/h2&gt;&lt;h3 id="cost"&gt;Cost
&lt;/h3&gt;&lt;p&gt;LLM API costs can be surprisingly high. A code review pipeline that processes 50 PRs per day with large diffs can easily run hundreds of dollars per month. Strategies to control costs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set token limits per request&lt;/li&gt;
&lt;li&gt;Cache common queries and responses&lt;/li&gt;
&lt;li&gt;Use smaller models for simpler tasks (triage with a small model, escalate to a larger one)&lt;/li&gt;
&lt;li&gt;Monitor token usage per pipeline and set alerts&lt;/li&gt;
&lt;/ul&gt;
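&lt;p&gt;Two of these strategies, the per-request token cap and the response cache, fit in a few lines. This is a sketch: the 4-characters-per-token estimate is a crude heuristic, and &lt;code&gt;call_model&lt;/code&gt; is a hypothetical stand-in for the real API call:&lt;/p&gt;

```python
import hashlib

class CachedLLM:
    """Sketch of two cost controls: a per-request token budget and a
    response cache keyed on the normalized prompt."""

    def __init__(self, call_model, max_input_tokens=4000):
        self.call_model = call_model        # stand-in for the real API call
        self.max_input_tokens = max_input_tokens
        self.cache = {}
        self.api_calls = 0                  # for monitoring / alerting

    def ask(self, prompt: str) -> str:
        # Crude token estimate: ~4 characters per token for English text.
        if len(prompt) / 4 > self.max_input_tokens:
            raise ValueError("prompt exceeds token budget; chunk it first")
        key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
        if key not in self.cache:
            self.api_calls += 1
            self.cache[key] = self.call_model(prompt)
        return self.cache[key]

llm = CachedLLM(lambda p: "cached answer")
llm.ask("What does exit code 137 mean?")
llm.ask("  what does exit code 137 mean?")   # cache hit, no second API call
print(llm.api_calls)
```

&lt;p&gt;In practice the cache key should also include the model name and prompt-template version, or a template change will silently serve stale answers.&lt;/p&gt;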
&lt;h3 id="latency"&gt;Latency
&lt;/h3&gt;&lt;p&gt;LLM responses take seconds, not milliseconds. Design your integrations as asynchronous processes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Post code review comments after the fact; do not block the PR&lt;/li&gt;
&lt;li&gt;Process incident data in the background, push results to a Slack channel&lt;/li&gt;
&lt;li&gt;Use streaming responses where possible to improve perceived performance&lt;/li&gt;
&lt;/ul&gt;
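&lt;p&gt;A minimal sketch of that asynchronous shape using a thread pool: the slow LLM call runs in the background while the pipeline continues, and a callback posts the result (here just a &lt;code&gt;print&lt;/code&gt;; a real integration would call the Slack or GitHub API):&lt;/p&gt;

```python
import concurrent.futures
import time

def slow_llm_review(diff: str) -> str:
    time.sleep(0.1)               # stand-in for a multi-second LLM call
    return f"review of {len(diff)} bytes"

def post_comment(future):
    # Runs when the background call finishes; the pipeline never waited.
    print("posting:", future.result())

executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)
future = executor.submit(slow_llm_review, "diff --git a/app.py ...")
future.add_done_callback(post_comment)

print("pipeline continues without blocking")
executor.shutdown(wait=True)      # in a long-lived service, keep it alive
```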
&lt;h3 id="hallucinations"&gt;Hallucinations
&lt;/h3&gt;&lt;p&gt;LLMs will confidently generate plausible-sounding but incorrect information. This is a critical concern for DevOps tasks where bad advice can cause outages.&lt;/p&gt;
&lt;p&gt;Mitigations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Always present LLM output as suggestions, never as authoritative actions&lt;/li&gt;
&lt;li&gt;Require human approval before any LLM-generated change is applied&lt;/li&gt;
&lt;li&gt;Use RAG to ground responses in verified documentation&lt;/li&gt;
&lt;li&gt;Implement output validation (e.g., lint generated IaC before presenting it)&lt;/li&gt;
&lt;/ul&gt;
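&lt;p&gt;The last mitigation is mechanical and worth automating: treat LLM output as untrusted input and machine-check it before a human ever sees it. A sketch, assuming the model was asked to return a small JSON config with a known schema:&lt;/p&gt;

```python
import json

# Assumed contract for this sketch: the LLM returns a JSON object with
# these keys. Anything that fails validation never reaches a reviewer.
REQUIRED_KEYS = {"name", "image", "replicas"}

def validate_llm_config(raw: str):
    """Return (config, problems); config is None if the output is unusable."""
    try:
        config = json.loads(raw)
    except json.JSONDecodeError as e:
        return None, [f"not valid JSON: {e}"]
    if not isinstance(config, dict):
        return None, ["top level must be a JSON object"]
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - config.keys())]
    if not isinstance(config.get("replicas"), int):
        problems.append("replicas must be an integer")
    return config, problems

llm_output = '{"name": "api", "image": "api:1.2", "replicas": "three"}'
config, problems = validate_llm_config(llm_output)
print(problems)   # the hallucinated "three" is caught automatically
```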
&lt;h3 id="security"&gt;Security
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data exposure&lt;/strong&gt;: Anything you send to an external LLM API may be used for training or stored. Never send secrets, credentials, or sensitive customer data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prompt injection&lt;/strong&gt;: Malicious content in code, logs, or user input can manipulate LLM behavior. Sanitize inputs and validate outputs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supply chain&lt;/strong&gt;: LLM-generated code may introduce vulnerabilities. Run all generated code through your existing security scanning pipeline.&lt;/li&gt;
&lt;/ul&gt;
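&lt;p&gt;A first line of defense against data exposure is to scrub obvious credential patterns before anything leaves the network. The regexes below are illustrative, not exhaustive; production systems should use a dedicated secret scanner:&lt;/p&gt;

```python
import re

# Illustrative redaction patterns only -- a real deployment should run a
# purpose-built secret scanner before sending data to an external API.
PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
     r"\1=[REDACTED]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED-AWS-KEY]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED-PRIVATE-KEY]"),
]

def scrub(text: str) -> str:
    """Redact credential-shaped substrings before the text leaves the network."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

log_line = "deploy failed: API_KEY=sk-abc123 while calling billing service"
print(scrub(log_line))
```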
&lt;h2 id="tools-and-platforms"&gt;Tools and platforms
&lt;/h2&gt;&lt;h3 id="langchain"&gt;LangChain
&lt;/h3&gt;&lt;p&gt;A framework for building LLM-powered applications. Useful for orchestrating multi-step chains (e.g., retrieve context, format prompt, call LLM, parse output). Supports many LLM providers and has good tooling for RAG pipelines.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;span class="lnt"&gt;7
&lt;/span&gt;&lt;span class="lnt"&gt;8
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;langchain.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;langchain.prompts&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;&amp;#34;Review this code diff for security issues and suggest fixes:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="si"&gt;{diff}&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;gpt-4o&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;diff&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;code_diff&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id="vllm"&gt;vLLM
&lt;/h3&gt;&lt;p&gt;A high-throughput inference engine for self-hosted models. Supports PagedAttention for efficient memory management and continuous batching for high throughput.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Start a vLLM server&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;python -m vllm.entrypoints.openai.api_server &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --model mistralai/Mistral-7B-Instruct-v0.2 &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --port &lt;span class="m"&gt;8000&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Exposes an OpenAI-compatible API, so you can swap between self-hosted and external APIs with minimal code changes.&lt;/p&gt;
&lt;h3 id="ollama"&gt;Ollama
&lt;/h3&gt;&lt;p&gt;The easiest way to run LLMs locally for development and testing. Great for prototyping pipelines before committing to infrastructure.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;span class="lnt"&gt;7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Pull and run a model&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ollama pull llama3
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ollama run llama3 &lt;span class="s2"&gt;&amp;#34;Summarize this error log: [paste log]&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Serve as an API&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ollama serve
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Then call http://localhost:11434/api/generate&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="example-automated-pr-review-pipeline"&gt;Example: Automated PR review pipeline
&lt;/h2&gt;&lt;p&gt;Here is a conceptual pipeline for automated PR review using an LLM:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;span class="lnt"&gt;20
&lt;/span&gt;&lt;span class="lnt"&gt;21
&lt;/span&gt;&lt;span class="lnt"&gt;22
&lt;/span&gt;&lt;span class="lnt"&gt;23
&lt;/span&gt;&lt;span class="lnt"&gt;24
&lt;/span&gt;&lt;span class="lnt"&gt;25
&lt;/span&gt;&lt;span class="lnt"&gt;26
&lt;/span&gt;&lt;span class="lnt"&gt;27
&lt;/span&gt;&lt;span class="lnt"&gt;28
&lt;/span&gt;&lt;span class="lnt"&gt;29
&lt;/span&gt;&lt;span class="lnt"&gt;30
&lt;/span&gt;&lt;span class="lnt"&gt;31
&lt;/span&gt;&lt;span class="lnt"&gt;32
&lt;/span&gt;&lt;span class="lnt"&gt;33
&lt;/span&gt;&lt;span class="lnt"&gt;34
&lt;/span&gt;&lt;span class="lnt"&gt;35
&lt;/span&gt;&lt;span class="lnt"&gt;36
&lt;/span&gt;&lt;span class="lnt"&gt;37
&lt;/span&gt;&lt;span class="lnt"&gt;38
&lt;/span&gt;&lt;span class="lnt"&gt;39
&lt;/span&gt;&lt;span class="lnt"&gt;40
&lt;/span&gt;&lt;span class="lnt"&gt;41
&lt;/span&gt;&lt;span class="lnt"&gt;42
&lt;/span&gt;&lt;span class="lnt"&gt;43
&lt;/span&gt;&lt;span class="lnt"&gt;44
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c"&gt;# .github/workflows/llm-review.yml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;LLM Code Review&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;pull_request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;types&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="l"&gt;opened, synchronize]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;llm-review&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;runs-on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ubuntu-latest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Checkout&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;actions/checkout@v4&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;with&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;fetch-depth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Get diff&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;diff&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; git diff origin/${{ github.base_ref }}...HEAD &amp;gt; diff.txt&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Run LLM review&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;LLM_API_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;${{ secrets.LLM_API_KEY }}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; python scripts/llm_review.py \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; --diff diff.txt \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; --model gpt-4o \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; --max-tokens 2000 \
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; --output review.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Post review comments&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;actions/github-script@v7&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;with&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;script&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="sd"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; const review = require(&amp;#39;./review.json&amp;#39;);
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; await github.rest.pulls.createReview({
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; owner: context.repo.owner,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; repo: context.repo.repo,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; pull_number: context.issue.number,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; body: review.summary,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; event: &amp;#39;COMMENT&amp;#39;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; comments: review.line_comments
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sd"&gt; });&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The review script would:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Read the diff&lt;/li&gt;
&lt;li&gt;Split large diffs into chunks that fit within the model&amp;rsquo;s context window&lt;/li&gt;
&lt;li&gt;For each chunk, construct a prompt asking for security issues, bugs, and style problems&lt;/li&gt;
&lt;li&gt;Aggregate results and format as GitHub review comments&lt;/li&gt;
&lt;li&gt;Include confidence scores and always mark output as AI-generated&lt;/li&gt;
&lt;/ol&gt;
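&lt;p&gt;Step 2 is the part most often done badly, so here is a sketch of it. Splitting on file boundaries keeps hunks intact; the 4-characters-per-token ratio is a rough heuristic, not a real tokenizer:&lt;/p&gt;

```python
# Sketch of the chunking step: split a large diff into pieces that fit a
# model's context window, breaking only on file boundaries.
def chunk_diff(diff: str, max_tokens: int = 6000) -> list[str]:
    max_chars = max_tokens * 4   # crude chars-per-token heuristic
    files = ["diff --git" + part
             for part in diff.split("diff --git") if part.strip()]
    chunks, current = [], ""
    for f in files:
        if current and len(current) + len(f) > max_chars:
            chunks.append(current)
            current = ""
        current += f               # an oversized single file stays whole
    if current:
        chunks.append(current)
    return chunks

diff = "diff --git a/a.py b/a.py\n+print('a')\n" * 3
print(len(chunk_diff(diff, max_tokens=20)))
```

&lt;p&gt;A single file larger than the budget still produces an oversized chunk; a fuller version would fall back to splitting on hunk headers (&lt;code&gt;@@&lt;/code&gt;).&lt;/p&gt;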
&lt;h2 id="guardrails-and-responsible-use"&gt;Guardrails and responsible use
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Label all LLM output clearly&lt;/strong&gt; as AI-generated. Engineers should know when they are reading machine output.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Never auto-merge or auto-apply&lt;/strong&gt; LLM suggestions. Keep a human in the loop for all changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Log all prompts and responses&lt;/strong&gt; for debugging and audit purposes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Set spending limits&lt;/strong&gt; and alerts on LLM API usage.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Review prompt templates regularly&lt;/strong&gt; to ensure they do not leak sensitive information.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test for bias and errors&lt;/strong&gt; with representative samples before deploying to production workflows.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="getting-started-recommendations"&gt;Getting started recommendations
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Pick one use case&lt;/strong&gt; - Don&amp;rsquo;t try to LLM-enable everything at once. Start low-risk: documentation drafts, commit message suggestions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Start with an external API&lt;/strong&gt; - Don&amp;rsquo;t invest in GPU infrastructure until you&amp;rsquo;ve validated the use case. Use OpenAI or Anthropic to prototype.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Measure everything&lt;/strong&gt; - Track cost per invocation, latency, user satisfaction, error rates from day one.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Build an evaluation framework&lt;/strong&gt; - Create a test suite of known-good inputs and expected outputs. Run it against every prompt change or model update.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plan your data strategy&lt;/strong&gt; - Decide early what data you will and will not send to external APIs, and document it clearly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Iterate on prompts&lt;/strong&gt; - Prompt engineering is iterative. Keep prompts in version control and treat them as code.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;LLMs are a powerful tool for DevOps automation, but they&amp;rsquo;re exactly that: a tool. They work best when thoughtfully integrated into existing workflows, with clear boundaries on what they can and cannot do autonomously.&lt;/p&gt;</description></item><item><title>Basic Kubernetes Commands</title><link>https://adurrr.github.io/en/p/basic-kubernetes-commands/</link><pubDate>Thu, 29 Oct 2020 00:00:00 +0000</pubDate><guid>https://adurrr.github.io/en/p/basic-kubernetes-commands/</guid><description>&lt;p&gt;This post provides an overview of the basic &lt;code&gt;kubectl&lt;/code&gt; CLI commands that can be applied to Kubernetes objects. Some examples of Kubernetes objects include Pods, ReplicaSets, Deployments, Namespaces, etc.&lt;/p&gt;
&lt;h2 id="namespaces"&gt;Namespaces
&lt;/h2&gt;&lt;p&gt;&lt;code&gt;Namespaces&lt;/code&gt; are used in Kubernetes to organize cluster objects. Essentially, a &lt;code&gt;namespace&lt;/code&gt; &lt;strong&gt;represents a folder containing a set of objects&lt;/strong&gt;. By default, &lt;code&gt;kubectl&lt;/code&gt; interacts with the &lt;code&gt;default&lt;/code&gt; namespace. To use a different namespace, the &lt;code&gt;--namespace&lt;/code&gt; flag is required, for example &lt;code&gt;--namespace=example&lt;/code&gt;. To interact with all namespaces, use the &lt;code&gt;--all-namespaces&lt;/code&gt; flag&lt;cite&gt; &lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/cite&gt;&lt;/p&gt;
&lt;h3 id="contexts"&gt;Contexts
&lt;/h3&gt;&lt;p&gt;If you want to change the default namespace permanently, you can use a &lt;code&gt;context&lt;/code&gt;. This gets recorded in the kubectl configuration file, stored at &lt;code&gt;$HOME/.kube/config&lt;/code&gt;. To create a context with a different default namespace, run&lt;cite&gt; &lt;sup id="fnref1:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl config set-context my-context --namespace&lt;span class="o"&gt;=&lt;/span&gt;nuevonamespacepordefecto
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl config use-context my-context
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="kubernetes-api-objects"&gt;Kubernetes API objects
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Every Kubernetes object is represented by a RESTful resource and exists at a unique HTTP path in the Kubernetes API&lt;/strong&gt;. Resources are represented as &lt;strong&gt;JSON or YAML files&lt;/strong&gt;. You can access these objects through the &lt;code&gt;kubectl&lt;/code&gt; command. For example, &lt;code&gt;kubectl get&lt;/code&gt; lists all objects of a given resource type in the default namespace&lt;cite&gt; &lt;sup id="fnref2:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get &amp;lt;resource-name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;To get a more specific resource:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl get &amp;lt;resource-name&amp;gt; &amp;lt;object-name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;To get the complete object in JSON or YAML format, you can add the &lt;code&gt;-o json&lt;/code&gt; or &lt;code&gt;-o yaml&lt;/code&gt; flag respectively. This output is detailed but not very human-readable.&lt;/p&gt;
&lt;p&gt;Another option to get human-readable details about an object is to use the &lt;code&gt;kubectl describe&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl describe &amp;lt;resource-name&amp;gt; &amp;lt;object-name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="creating-updating-or-deleting-kubernetes-objects"&gt;Creating, updating, or deleting Kubernetes objects
&lt;/h2&gt;&lt;p&gt;As mentioned earlier, Kubernetes objects are represented by JSON or YAML files, and those same files are used to create, update, or delete them. For example, to create or update the object described in &lt;code&gt;ejemplo.yaml&lt;/code&gt;, run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f ejemplo.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If you prefer to make interactive edits instead of modifying the local file, you can use the &lt;code&gt;kubectl edit&lt;/code&gt; command, which downloads the latest version of the object and launches an editor. After you save and close the editor, the modified object is uploaded to the cluster and applied automatically.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl edit &amp;lt;resource-name&amp;gt; &amp;lt;object-name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The &lt;code&gt;kubectl apply&lt;/code&gt; command also records the last applied configuration in an annotation on the object. You can inspect or modify this record using the &lt;code&gt;edit-last-applied&lt;/code&gt;, &lt;code&gt;set-last-applied&lt;/code&gt;, and &lt;code&gt;view-last-applied&lt;/code&gt; subcommands.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl apply -f myobj.yaml view-last-applied
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;To delete an object, simply run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl delete -f ejemplo.yaml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="debugging"&gt;Debugging
&lt;/h2&gt;&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; also has a set of commands for debugging your containers. To view the logs of a running container, run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl logs &amp;lt;pod-name&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If there are multiple containers in the pod, you can choose the container to inspect with the &lt;code&gt;-c&lt;/code&gt; flag.&lt;/p&gt;
&lt;p&gt;By default, &lt;code&gt;kubectl logs&lt;/code&gt; lists the current logs and exits. If you want to continuously stream the logs to the terminal instead, you can add the &lt;code&gt;-f&lt;/code&gt; (follow) flag to the command line.&lt;/p&gt;
&lt;p&gt;You can also use the &lt;code&gt;exec&lt;/code&gt; command to run a command in a running container:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; -it &amp;lt;pod-name&amp;gt; -- bash
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This will provide an interactive console inside the running container for more detailed debugging.&lt;/p&gt;
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;&lt;a class="link" href="https://www.oreilly.com/library/view/kubernetes-up-and/9781492046523/" target="_blank" rel="noopener"
 &gt;O&amp;rsquo;Reilly, Kubernetes: Up and Running&lt;/a&gt;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref2:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Basic Kubernetes Objects</title><link>https://adurrr.github.io/en/p/basic-kubernetes-objects/</link><pubDate>Thu, 29 Oct 2020 00:00:00 +0000</pubDate><guid>https://adurrr.github.io/en/p/basic-kubernetes-objects/</guid><description>&lt;p&gt;This post provides a description of the basic Kubernetes objects. Some examples of Kubernetes objects include Pods, ReplicaSets, Deployments, Namespaces, etc.&lt;/p&gt;
&lt;h2 id="basic-objects"&gt;Basic objects
&lt;/h2&gt;&lt;p&gt;According to the &lt;a class="link" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" target="_blank" rel="noopener"
 &gt;Kubernetes documentation&lt;/a&gt;, Kubernetes &lt;strong&gt;objects&lt;/strong&gt; are &lt;strong&gt;persistent entities&lt;/strong&gt;. Kubernetes uses these entities to represent the &lt;strong&gt;state of the cluster&lt;/strong&gt;. Each Kubernetes object is represented by a RESTful resource and exists at a unique HTTP path. Specifically, &lt;strong&gt;objects can describe&lt;/strong&gt;&lt;cite&gt; &lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Which &lt;strong&gt;containerized applications are running (and on which nodes)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;resources available&lt;/strong&gt; to those applications.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;behavioral policies for those applications&lt;/strong&gt;, such as restart policies, upgrades, and fault tolerance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Almost all Kubernetes objects include two fields that configure them: the &lt;code&gt;spec&lt;/code&gt;, which declares the &lt;strong&gt;desired state of the object&lt;/strong&gt;, and the &lt;code&gt;status&lt;/code&gt;, which reflects the &lt;strong&gt;actual/current state of the object&lt;/strong&gt;. The &lt;strong&gt;control plane&lt;/strong&gt; continuously works to &lt;strong&gt;match the actual state of the object with the desired state&lt;/strong&gt; declared in the &lt;code&gt;spec&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To create an object, the Kubernetes API must receive the object information in JSON format. Most of the time, the information is sent through &lt;code&gt;kubectl&lt;/code&gt; in a &lt;code&gt;.yaml&lt;/code&gt; file that will be converted to JSON format. An example of a &lt;code&gt;.yaml&lt;/code&gt; file is as follows&lt;cite&gt; &lt;sup id="fnref1:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;apps/v1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# for versions before 1.9.0 use apps/v1beta2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;Deployment&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;nginx-deployment&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;matchLabels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;nginx&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;replicas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# tells deployment to run 2 pods matching the template&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;nginx&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;nginx&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;nginx:1.14.2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="nt"&gt;containerPort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id="required-fields"&gt;Required fields
&lt;/h3&gt;&lt;p&gt;The following mandatory fields must be set to create an object&lt;cite&gt; &lt;sup id="fnref2:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;apiVersion&lt;/code&gt;: which &lt;strong&gt;version of the Kubernetes API&lt;/strong&gt; is being used to create the object.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kind&lt;/code&gt;: what &lt;strong&gt;type of object&lt;/strong&gt; you want to create.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;metadata&lt;/code&gt;: data that serves to &lt;strong&gt;uniquely identify the object&lt;/strong&gt;, including a name, a UID, and an optional &lt;code&gt;namespace&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;spec&lt;/code&gt;: the &lt;strong&gt;desired state&lt;/strong&gt; for the object.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="labels"&gt;Labels
&lt;/h2&gt;&lt;p&gt;Labels are &lt;strong&gt;key-value pairs attached to Kubernetes objects&lt;/strong&gt;. They are used to &lt;strong&gt;organize and select subsets of objects&lt;/strong&gt; based on predefined requirements. Many objects can share the same label, so labels do not provide uniqueness to objects&lt;cite&gt; &lt;sup id="fnref:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/cite&gt;&lt;/p&gt;
&lt;p&gt;For example, to add the label &lt;code&gt;color=green&lt;/code&gt; to a Pod called plant, you can run the following&lt;cite&gt; &lt;sup id="fnref:3"&gt;&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref"&gt;3&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl label pods planta &lt;span class="nv"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;verde
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The above command will not overwrite an existing label; to do that, you need to add the &lt;code&gt;--overwrite&lt;/code&gt; flag. To remove the &lt;code&gt;color&lt;/code&gt; label instead, run the following command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;kubectl label pods bar color -
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id="label-selectors"&gt;Label selectors
&lt;/h3&gt;&lt;p&gt;Controllers use label selectors to select a subset of objects. Kubernetes supports two types of selectors&lt;cite&gt; &lt;sup id="fnref:4"&gt;&lt;a href="#fn:4" class="footnote-ref" role="doc-noteref"&gt;4&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;h4 id="equality-based-selectors"&gt;Equality-based selectors
&lt;/h4&gt;&lt;p&gt;Allow filtering objects based on label keys and values. Matching is achieved using the operators:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Equals &lt;code&gt;=&lt;/code&gt; or &lt;code&gt;==&lt;/code&gt; (there is no difference between the operators)&lt;/li&gt;
&lt;li&gt;Not equals &lt;code&gt;!=&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="set-based-selectors"&gt;Set-based selectors
&lt;/h4&gt;&lt;p&gt;Allow filtering objects based on a set of values. You can use &lt;code&gt;in&lt;/code&gt;, &lt;code&gt;notin&lt;/code&gt; operators for label values, and the &lt;code&gt;exists&lt;/code&gt; operator for label keys.&lt;/p&gt;
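&lt;p&gt;As an illustrative snippet (assumed names, not taken from the examples above), a &lt;code&gt;Deployment&lt;/code&gt; selector can combine both styles: &lt;code&gt;matchLabels&lt;/code&gt; performs equality-based matching, while &lt;code&gt;matchExpressions&lt;/code&gt; uses the set-based operators:&lt;/p&gt;

```yaml
selector:
  matchLabels:
    app: nginx                 # equality-based: app == nginx
  matchExpressions:
    - key: tier                # set-based: tier in (frontend, cache)
      operator: In
      values: [frontend, cache]
    - key: environment         # set-based: the key just has to exist
      operator: Exists
```

&lt;p&gt;All conditions in a selector are ANDed together, so an object must satisfy every expression to be selected.&lt;/p&gt;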
&lt;h2 id="object-types"&gt;Object types
&lt;/h2&gt;&lt;h3 id="pods"&gt;Pods
&lt;/h3&gt;&lt;p&gt;A pod is the smallest scheduling unit in Kubernetes. It is a logical collection of one or more containers that&lt;cite&gt; &lt;sup id="fnref1:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Are scheduled on the same host.&lt;/li&gt;
&lt;li&gt;Share the same &lt;code&gt;network namespace&lt;/code&gt;, and therefore share a single IP address assigned to the Pod.&lt;/li&gt;
&lt;li&gt;Have access to mount the same external storage (volumes).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Pods are ephemeral in nature and do not have self-healing capabilities. That is why controllers are used to manage Pod replication, fault tolerance, self-healing, etc. Some of these controllers include Deployments, ReplicaSets, etc.&lt;/p&gt;
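&lt;p&gt;For reference, a minimal standalone Pod manifest looks like the following (an illustrative example with assumed names, using the required fields described earlier):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # assumed example name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```

&lt;p&gt;In practice you rarely create bare Pods like this; a controller such as a Deployment manages them for you.&lt;/p&gt;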
&lt;h3 id="replicasets"&gt;ReplicaSets
&lt;/h3&gt;&lt;h3 id="deployments"&gt;Deployments
&lt;/h3&gt;&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;&lt;a class="link" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" target="_blank" rel="noopener"
 &gt;Kubernetes Documentation, Understanding Kubernetes Objects&lt;/a&gt;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref2:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;&lt;a class="link" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" target="_blank" rel="noopener"
 &gt;Kubernetes Documentation, Labels and Selectors&lt;/a&gt;&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:3"&gt;
&lt;p&gt;&lt;a class="link" href="https://www.oreilly.com/library/view/kubernetes-up-and/9781492046523/" target="_blank" rel="noopener"
 &gt;O&amp;rsquo;Reilly, Kubernetes: Up and Running&lt;/a&gt;&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:4"&gt;
&lt;p&gt;&lt;a class="link" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" target="_blank" rel="noopener"
 &gt;Kubernetes Documentation, Label selectors&lt;/a&gt;&amp;#160;&lt;a href="#fnref:4" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Getting Started with Minikube</title><link>https://adurrr.github.io/en/p/getting-started-with-minikube/</link><pubDate>Thu, 29 Oct 2020 00:00:00 +0000</pubDate><guid>https://adurrr.github.io/en/p/getting-started-with-minikube/</guid><description>&lt;h2 id="local-kubernetes-installation"&gt;Local Kubernetes installation
&lt;/h2&gt;&lt;p&gt;There are several tools that can be used to deploy Kubernetes on one or many clusters. Some of them include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://kubernetes.io/docs/setup/learning-environment/minikube/" target="_blank" rel="noopener"
 &gt;Minikube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://kubernetes.io/docs/setup/learning-environment/kind/" target="_blank" rel="noopener"
 &gt;Kind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.docker.com/products/docker-desktop" target="_blank" rel="noopener"
 &gt;Docker Desktop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://microk8s.io/" target="_blank" rel="noopener"
 &gt;MicroK8s&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://k3s.io/" target="_blank" rel="noopener"
 &gt;K3S&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Minikube is the easiest&lt;/strong&gt; and preferred method for setting up Kubernetes locally. It is used to manage a &lt;strong&gt;single-node cluster&lt;/strong&gt;, although there is already an experimental feature that supports multi-node clusters.&lt;/p&gt;
&lt;h2 id="minikube"&gt;Minikube
&lt;/h2&gt;&lt;p&gt;The &lt;code&gt;minikube&lt;/code&gt; project is a &lt;strong&gt;local Kubernetes cluster implementation for Linux, macOS, and Windows&lt;/strong&gt;. Its goal is to be the best tool for local Kubernetes application development&lt;cite&gt; &lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/cite&gt;&lt;/p&gt;
&lt;p&gt;The first steps with minikube can be found in the &lt;a class="link" href="https://minikube.sigs.k8s.io/docs/start/" target="_blank" rel="noopener"
 &gt;official documentation&lt;/a&gt; and are as follows&lt;cite&gt; &lt;sup id="fnref:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;h2 id="requirements"&gt;Requirements
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;2 CPUs or more&lt;/li&gt;
&lt;li&gt;2GB of RAM&lt;/li&gt;
&lt;li&gt;20GB of disk space&lt;/li&gt;
&lt;li&gt;Internet connection&lt;/li&gt;
&lt;li&gt;Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="installing-minikube"&gt;Installing minikube
&lt;/h2&gt;&lt;p&gt;For Linux there are three options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Binary package:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; sudo install minikube-linux-amd64 /usr/local/bin/minikube
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;ul&gt;
&lt;li&gt;Debian package:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo dpkg -i minikube_latest_amd64.deb
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;ul&gt;
&lt;li&gt;RPM package:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo rpm -ivh minikube-latest.x86_64.rpm
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;To &lt;strong&gt;start&lt;/strong&gt; minikube, run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube start
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;To &lt;strong&gt;stop&lt;/strong&gt; minikube safely, run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;minikube stop
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h2 id="installing-kubernetes"&gt;Installing Kubernetes
&lt;/h2&gt;&lt;p&gt;Kubernetes can be installed locally on virtual machines or directly on the operating system. Tools such as &lt;code&gt;Ansible&lt;/code&gt; or &lt;code&gt;kubeadm&lt;/code&gt; can be used to &lt;strong&gt;automate the installation&lt;/strong&gt;.&lt;/p&gt;
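&lt;p&gt;As a sketch of the &lt;code&gt;kubeadm&lt;/code&gt; route (illustrative only: it must run as root on a prepared node, and the pod network CIDR shown is just a common example used with Flannel):&lt;/p&gt;

```shell
# Initialize a control plane node (illustrative; requires a prepared host)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig so kubectl can reach the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```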
&lt;p&gt;The CLI tool &lt;code&gt;kubectl&lt;/code&gt; can be used to &lt;strong&gt;manage, deploy, and configure the resources and applications of the Minikube cluster&lt;/strong&gt;, and can be installed with the following commands:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; sudo apt-get install -y apt-transport-https ca-certificates gnupg curl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; sudo mkdir -p /etc/apt/keyrings
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key &lt;span class="p"&gt;|&lt;/span&gt; sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /&amp;#34;&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; sudo tee /etc/apt/sources.list.d/kubernetes.list
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get update
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get install -y kubectl
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;For details on &lt;strong&gt;kubectl&lt;/strong&gt; commands, you can refer to the &lt;a class="link" href="https://kubectl.docs.kubernetes.io/" target="_blank" rel="noopener"
 &gt;kubectl book&lt;/a&gt;, the &lt;a class="link" href="https://kubernetes.io/search/?q=kubectl" target="_blank" rel="noopener"
 &gt;official Kubernetes documentation&lt;/a&gt;, or its &lt;a class="link" href="https://github.com/kubernetes/kubectl" target="_blank" rel="noopener"
 &gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A common step after installation is to configure and enable &lt;strong&gt;kubectl&lt;/strong&gt; command autocompletion:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt install -y bash-completion
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;source&lt;/span&gt; /usr/share/bash-completion/bash-completion
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;source&lt;/span&gt; &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;kubectl completion bash&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;source &amp;lt;(kubectl completion bash)&amp;#39;&lt;/span&gt; &amp;gt;&amp;gt;~/.bashrc
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Other relevant packages to install include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubeadm&lt;/code&gt;: bootstraps the cluster and automates the installation of the control plane components&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubelet&lt;/code&gt;: an agent that runs on each node and communicates with the control plane components&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubernetes-cni&lt;/code&gt;: provides the CNI plugins used to configure pod networking&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;sudo apt-get install kubelet kubeadm kubernetes-cni
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;&lt;a class="link" href="https://github.com/kubernetes/minikube" target="_blank" rel="noopener"
 &gt;GitHub, minikube&lt;/a&gt;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;&lt;a class="link" href="https://minikube.sigs.k8s.io/docs/start/" target="_blank" rel="noopener"
 &gt;Minikube, Getting Started&lt;/a&gt;&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Kubernetes Architecture Components</title><link>https://adurrr.github.io/en/p/kubernetes-architecture-components/</link><pubDate>Wed, 28 Oct 2020 00:00:00 +0000</pubDate><guid>https://adurrr.github.io/en/p/kubernetes-architecture-components/</guid><description>&lt;p&gt;This post covers the components of the Kubernetes architecture. The main sources of information are the &lt;strong&gt;&lt;a class="link" href="https://www.edx.org/es/course/introduction-to-kubernetes" target="_blank" rel="noopener"
 &gt;Introduction to Kubernetes&lt;/a&gt;&lt;/strong&gt; course by &lt;strong&gt;The Linux Foundation&lt;/strong&gt; on edX, authored by Chris Pokorni and Neependra Khare, and the official &lt;a class="link" href="https://kubernetes.io/docs/home/" target="_blank" rel="noopener"
 &gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="components-of-the-kubernetes-architecture"&gt;Components of the Kubernetes architecture
&lt;/h2&gt;&lt;p&gt;A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. At a very high level of abstraction, Kubernetes has the following main components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One or more &lt;strong&gt;master nodes&lt;/strong&gt;, on the control plane side.&lt;/li&gt;
&lt;li&gt;One or more &lt;strong&gt;worker nodes&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following figure shows the &lt;strong&gt;architecture of the components of a Kubernetes cluster&lt;/strong&gt;&lt;cite&gt; &lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;!-- ![Figure 1: Kubernetes architecture components.](../img/arch-kubernetes.svg) --&gt;
&lt;p&gt;The &lt;strong&gt;master node&lt;/strong&gt; provides a runtime environment for the control plane responsible for &lt;strong&gt;managing the state of a Kubernetes cluster and is the brain behind all operations&lt;/strong&gt; within the cluster. The &lt;strong&gt;control plane components&lt;/strong&gt; are agents with &lt;strong&gt;very distinct roles in cluster management&lt;/strong&gt;. To &lt;strong&gt;communicate&lt;/strong&gt; with the Kubernetes cluster, users &lt;strong&gt;send requests to the control plane through&lt;/strong&gt; a &lt;strong&gt;command-line interface&lt;/strong&gt; (CLI) tool, a &lt;strong&gt;web user interface&lt;/strong&gt; dashboard, or an &lt;strong&gt;application programming interface&lt;/strong&gt; (API)&lt;cite&gt; &lt;sup id="fnref:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;p&gt;It is essential to keep the control plane running at all costs. &lt;strong&gt;Losing the control plane can cause downtime, resulting in service disruption&lt;/strong&gt; to clients, with a potential loss of business. To &lt;strong&gt;ensure fault tolerance of the control plane&lt;/strong&gt;, &lt;strong&gt;master node replicas&lt;/strong&gt; can be added to the cluster, configured in &lt;strong&gt;high availability mode&lt;/strong&gt;. While only one of the master nodes is dedicated to actively managing the cluster, the control plane components remain synchronized across the master node replicas. This type of configuration adds resilience to the cluster&amp;rsquo;s control plane, in case the active master node fails&lt;cite&gt; &lt;sup id="fnref1:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;p&gt;To preserve the state of the Kubernetes cluster, all &lt;strong&gt;cluster configuration data&lt;/strong&gt; is saved in &lt;code&gt;etcd&lt;/code&gt;. &lt;strong&gt;etcd&lt;/strong&gt; is a &lt;strong&gt;distributed key-value store that only holds data related to the cluster state, not client workload data&lt;/strong&gt;. &lt;strong&gt;etcd&lt;/strong&gt; can be &lt;strong&gt;configured&lt;/strong&gt; on the &lt;strong&gt;master node (stacked topology) or on its dedicated host (external topology)&lt;/strong&gt; to help reduce the chances of data store loss by decoupling it from the other control plane agents&lt;cite&gt; &lt;sup id="fnref2:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;p&gt;With the stacked etcd topology, high availability master node replicas also ensure the resilience of the etcd data store. However, that is not the case with the external etcd topology, where etcd hosts must be replicated separately for high availability, a configuration that introduces the need for additional hardware.&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;master node&lt;/strong&gt; runs the following &lt;strong&gt;control plane&lt;/strong&gt; components&lt;cite&gt; &lt;sup id="fnref1:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kube-apiserver&lt;/code&gt; or API server&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kube-scheduler&lt;/code&gt; or scheduler&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kube-controller-manager&lt;/code&gt; or controller manager&lt;/li&gt;
&lt;li&gt;&lt;code&gt;etcd&lt;/code&gt; or data store&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While a &lt;strong&gt;worker node&lt;/strong&gt; has the following components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Container Runtime&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubelet&lt;/code&gt; or node agent&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kube-proxy&lt;/code&gt; or proxy&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Addons&lt;/code&gt; for DNS, dashboard, cluster-level monitoring, and logging&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="master-node"&gt;Master node
&lt;/h2&gt;&lt;h3 id="kube-apiserver"&gt;kube-apiserver
&lt;/h3&gt;&lt;p&gt;All &lt;strong&gt;administrative tasks are coordinated&lt;/strong&gt; by &lt;code&gt;kube-apiserver&lt;/code&gt;, a central control plane component that runs on the master node. The &lt;strong&gt;API server receives RESTful requests from users, operators, and external agents, then validates and processes them&lt;/strong&gt;. During processing, the API server reads the current state of the Kubernetes cluster from the etcd data store, and after the execution of a call, the resulting state of the Kubernetes cluster is saved in the distributed key-value data store for persistence. The API server is the only control plane component that communicates with the etcd data store, both for reading and saving Kubernetes cluster state information, acting as an intermediary interface for any other control plane agent querying the cluster state.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;API server is highly configurable and customizable&lt;/strong&gt;. It can &lt;strong&gt;scale horizontally&lt;/strong&gt;, and it also &lt;strong&gt;supports adding custom secondary API servers&lt;/strong&gt;, a configuration that turns the primary API server into a proxy for all custom secondary API servers and routes all incoming RESTful calls to them based on custom-defined rules&lt;cite&gt; &lt;sup id="fnref3:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;h3 id="kube-scheduler"&gt;kube-scheduler
&lt;/h3&gt;&lt;p&gt;The role of the &lt;code&gt;kube-scheduler&lt;/code&gt; is to &lt;strong&gt;assign new workload objects&lt;/strong&gt;, such as pods, to nodes. During the scheduling process, decisions are made based on the current state of the Kubernetes cluster and the requirements of the new object. The &lt;strong&gt;scheduler obtains from the etcd data store&lt;/strong&gt;, through the API server, &lt;strong&gt;the resource usage data for each worker node&lt;/strong&gt; in the cluster. The scheduler &lt;strong&gt;also receives&lt;/strong&gt; from the API server &lt;strong&gt;the requirements of the new object that are part of its configuration data&lt;/strong&gt;. Requirements may include constraints set by users and operators, such as scheduling work on a node labeled with &lt;strong&gt;&lt;code&gt;disk == ssd&lt;/code&gt;&lt;/strong&gt; as a key-value pair. The scheduler also &lt;strong&gt;takes into account Quality of Service (QoS) requirements, data locality, affinity, anti-affinity, dependent data location, taints, cluster topology, etc&lt;/strong&gt;. Once all the cluster data is available, the scheduling algorithm filters the nodes with predicates to isolate potential candidate nodes, which are then scored with priorities to select the node that satisfies all the requirements for the new workload. The result of the decision process is communicated to the API server, which then delegates the workload deployment to other control plane agents.&lt;/p&gt;
&lt;p&gt;The scheduler is highly configurable and customizable through scheduling policies, plugins, and profiles. Additional custom schedulers are also supported. A scheduler &lt;strong&gt;is extremely important and complex in a multi-node&lt;/strong&gt; Kubernetes cluster&lt;cite&gt; &lt;sup id="fnref4:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
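&lt;p&gt;The &lt;code&gt;disk == ssd&lt;/code&gt; constraint mentioned above can be expressed as a &lt;code&gt;nodeSelector&lt;/code&gt; in a Pod manifest (a minimal sketch: the node would first be labeled with something like &lt;code&gt;kubectl label nodes worker-1 disk=ssd&lt;/code&gt;, where the node name and image are hypothetical):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  # The scheduler will only consider nodes carrying the label disk=ssd
  nodeSelector:
    disk: ssd
  containers:
  - name: app
    image: nginx
```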
&lt;h3 id="kube-controller-manager"&gt;kube-controller-manager
&lt;/h3&gt;&lt;p&gt;A &lt;strong&gt;control plane&lt;/strong&gt; component that &lt;strong&gt;runs the controllers to regulate the state of the Kubernetes cluster&lt;/strong&gt;. Controllers are &lt;strong&gt;watch loops&lt;/strong&gt; that &lt;strong&gt;run continuously and compare the desired state&lt;/strong&gt; of the cluster (provided by the configuration data of objects) &lt;strong&gt;with its current state&lt;/strong&gt; (obtained from the etcd data store through the API server). In case of a discrepancy, corrective actions are taken in the cluster until its current state matches the desired state. It runs controllers responsible for acting &lt;strong&gt;when nodes become unavailable&lt;/strong&gt;, for &lt;strong&gt;ensuring the expected number of pods&lt;/strong&gt;, for &lt;strong&gt;creating endpoints&lt;/strong&gt;, &lt;strong&gt;service accounts, and API access tokens&lt;/strong&gt;&lt;cite&gt; &lt;sup id="fnref5:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;. Logically, &lt;strong&gt;each controller is an independent process&lt;/strong&gt;, but to reduce complexity, &lt;strong&gt;they are all compiled into a single binary and run in a single process&lt;/strong&gt;. These controllers include&lt;cite&gt; &lt;sup id="fnref2:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Node controller&lt;/strong&gt;: responsible for detecting and responding when a node goes down&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replication controller&lt;/strong&gt;: responsible for maintaining the correct number of pods for each replication controller in the system&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Endpoints controller&lt;/strong&gt;: builds the Endpoints object, i.e., joins Services and Pods&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Service account and token controllers&lt;/strong&gt;: create default accounts and API access tokens for new Namespaces.&lt;/li&gt;
&lt;/ul&gt;
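&lt;p&gt;The desired state that these watch loops reconcile is simply what is declared in an object&amp;rsquo;s spec. In this minimal Deployment sketch (names and image are illustrative), the replication machinery keeps recreating Pods until three are running:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3        # desired state: three Pods at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
```

&lt;p&gt;Deleting one of these Pods by hand triggers the control loop: it observes two running Pods against a desired three and creates a replacement.&lt;/p&gt;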
&lt;h3 id="etcd"&gt;etcd
&lt;/h3&gt;&lt;p&gt;A &lt;strong&gt;persistent, consistent, and distributed key-value data store used to store all Kubernetes cluster information&lt;/strong&gt;&lt;cite&gt; &lt;sup id="fnref3:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;. &lt;strong&gt;New data is appended&lt;/strong&gt; to the data store, never replaced. &lt;strong&gt;Obsolete data is periodically compacted&lt;/strong&gt; to minimize the size of the data store.&lt;/p&gt;
&lt;p&gt;Of all the control plane components, &lt;strong&gt;only the API server can communicate with the etcd data store&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The etcd CLI management tool, &lt;strong&gt;etcdctl&lt;/strong&gt;, &lt;strong&gt;provides options for backups, snapshots, and restores&lt;/strong&gt;. These are especially useful for a single-instance etcd Kubernetes cluster, common in development and learning environments. However, in Staging and Production environments, it is extremely important to replicate data stores in high availability mode.&lt;/p&gt;
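&lt;p&gt;A backup and restore cycle with &lt;strong&gt;etcdctl&lt;/strong&gt; looks roughly like this (illustrative only: the endpoint, certificate paths, and data directory are assumptions that depend on how the cluster was bootstrapped):&lt;/p&gt;

```shell
# Take a snapshot of a (stacked) etcd instance over its client endpoint
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Later, restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored
```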
&lt;p&gt;Some Kubernetes cluster bootstrapping tools, such as &lt;code&gt;kubeadm&lt;/code&gt;, provision &lt;strong&gt;stacked etcd master nodes&lt;/strong&gt;, where the data store runs alongside the other control plane components on the same master node and shares resources with them&lt;cite&gt; &lt;sup id="fnref6:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;!-- ![Figure 2: Stacked etcd topology.](../img/kubeadm-ha-topology-stacked-etcd.svg) --&gt;
&lt;p&gt;For &lt;strong&gt;data store isolation&lt;/strong&gt; from the control plane components, the bootstrapping process can be configured for an &lt;strong&gt;external etcd topology&lt;/strong&gt;. The data store is deployed on a separate dedicated host from the control plane, &lt;strong&gt;thus reducing the chances of an etcd failure&lt;/strong&gt;&lt;cite&gt; &lt;sup id="fnref7:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;!-- ![Figure 3: External etcd topology.](../img/kubeadm-ha-topology-external-etcd.svg) --&gt;
&lt;p&gt;Both &lt;strong&gt;stacked and external etcd topologies support&lt;/strong&gt; &lt;code&gt;high availability&lt;/code&gt; configurations. etcd is based on the &lt;strong&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Raft_%28algorithm%29" target="_blank" rel="noopener"
 &gt;Raft&lt;/a&gt; consensus protocol&lt;/strong&gt;, which &lt;strong&gt;allows&lt;/strong&gt; a &lt;strong&gt;set of machines to survive the failure of some of them&lt;/strong&gt;, including master node failures. At any given time, one of the nodes in the group will be the leader and the rest will be followers&lt;cite&gt; &lt;sup id="fnref8:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;!-- ![Figure 4: Leader and followers.](../img/Master_and_Followers.png) --&gt;
&lt;p&gt;&lt;strong&gt;etcd is written&lt;/strong&gt; in the &lt;strong&gt;Go&lt;/strong&gt; programming language. In Kubernetes, besides storing the cluster state, &lt;strong&gt;etcd is also used to store configuration details such as subnets, ConfigMaps, Secrets, etc&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="worker-node"&gt;Worker node
&lt;/h2&gt;&lt;p&gt;A &lt;strong&gt;worker node provides a runtime environment for client applications&lt;/strong&gt;. Although they are containerized microservices, these &lt;strong&gt;applications are encapsulated in pods&lt;/strong&gt;, controlled by the cluster&amp;rsquo;s control plane agents running on the master node. Pods are scheduled on worker nodes, where they find the necessary compute, memory, and storage resources to run, and networking to communicate with each other and the outside world. &lt;strong&gt;A pod is the smallest scheduling unit in Kubernetes. It is a logical collection of one or more containers scheduled together&lt;/strong&gt;, and the collection can be started, stopped, or rescheduled as a single unit of work.&lt;/p&gt;
&lt;p&gt;Additionally, in a multi-worker Kubernetes cluster, &lt;strong&gt;network traffic between client users and the containerized applications&lt;/strong&gt; deployed in Pods is &lt;strong&gt;handled directly by the worker nodes&lt;/strong&gt; and is not routed through the master node&lt;cite&gt; &lt;sup id="fnref9:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;h3 id="container-runtime"&gt;Container Runtime
&lt;/h3&gt;&lt;p&gt;Although Kubernetes is described as a &amp;ldquo;container orchestration engine&amp;rdquo;, &lt;strong&gt;it does not have the ability to handle containers directly&lt;/strong&gt;. To manage the lifecycle of a container, Kubernetes &lt;strong&gt;requires a container runtime&lt;/strong&gt; on the node where a Pod and its containers will be scheduled. Kubernetes supports many container runtimes&lt;cite&gt; &lt;sup id="fnref10:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a class="link" href="https://www.docker.com/" target="_blank" rel="noopener"
 &gt;Docker&lt;/a&gt;&lt;/strong&gt;: although it is a container platform that uses &lt;code&gt;containerd&lt;/code&gt; as its container runtime, it is the most popular option used with Kubernetes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a class="link" href="https://cri-o.io/" target="_blank" rel="noopener"
 &gt;CRI-O&lt;/a&gt;&lt;/strong&gt;: a lightweight container runtime for Kubernetes that also supports Docker image registries&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a class="link" href="https://containerd.io/" target="_blank" rel="noopener"
 &gt;containerd&lt;/a&gt;&lt;/strong&gt;: a simple, portable container runtime that provides robustness&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a class="link" href="https://github.com/kubernetes/frakti#frakti" target="_blank" rel="noopener"
 &gt;frakti&lt;/a&gt;&lt;/strong&gt;: a hypervisor-based container runtime for Kubernetes&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="kubelet"&gt;kubelet
&lt;/h3&gt;&lt;p&gt;The &lt;code&gt;kubelet&lt;/code&gt; is an &lt;strong&gt;agent that runs on every node and communicates with the control plane components on the master node&lt;/strong&gt;. It receives pod definitions, primarily from the API server, and interacts with the container runtime on the node to &lt;strong&gt;run containers associated with the pod&lt;/strong&gt;. It also &lt;strong&gt;monitors the health and resources of the containers running in pods&lt;/strong&gt;. The kubelet agent takes a set of Pod specifications, called &lt;strong&gt;PodSpecs&lt;/strong&gt;, that have been created by Kubernetes and ensures that the containers described in them are running and healthy.&lt;/p&gt;
&lt;p&gt;The kubelet &lt;strong&gt;connects&lt;/strong&gt; to container runtimes &lt;strong&gt;through&lt;/strong&gt; a plugin based on the &lt;strong&gt;Container Runtime Interface (CRI)&lt;/strong&gt;. The CRI consists of protocol buffers, gRPC APIs, libraries, and additional specifications and tools that are currently under development. To connect to interchangeable container runtimes, kubelet uses a shim application that provides a clear abstraction layer between kubelet and the container runtime.&lt;/p&gt;
&lt;!-- ![Figure 5: Container Runtime Interface (CRI).](../img/CRI.png) --&gt;
&lt;p&gt;From &lt;a class="link" href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" target="_blank" rel="noopener"
 &gt;blog.kubernetes.io&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As shown above, the &lt;strong&gt;kubelet acting as a gRPC client connects to the CRI shim&lt;/strong&gt;, which in turn &lt;strong&gt;acts as a gRPC server&lt;/strong&gt; to perform container and image operations. The CRI implements two services: &lt;code&gt;ImageService&lt;/code&gt; and &lt;code&gt;RuntimeService&lt;/code&gt;. &lt;code&gt;ImageService&lt;/code&gt; is responsible for all image-related operations, while &lt;code&gt;RuntimeService&lt;/code&gt; is responsible for all pod and container-related operations&lt;cite&gt; &lt;sup id="fnref11:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;h3 id="kube-proxy"&gt;kube-proxy
&lt;/h3&gt;&lt;p&gt;&lt;code&gt;kube-proxy&lt;/code&gt; is the &lt;strong&gt;network agent that runs on every node, responsible for dynamic updates and maintenance of all network rules&lt;/strong&gt; on the node. It extracts Pod network details and forwards connection requests to Pods.&lt;/p&gt;
&lt;p&gt;The kube-proxy &lt;strong&gt;is responsible for TCP, UDP, and SCTP stream forwarding or round-robin forwarding across a set of pod backends&lt;/strong&gt;, and it &lt;strong&gt;implements forwarding rules defined by users&lt;/strong&gt; through Service API objects&lt;cite&gt; &lt;sup id="fnref12:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;h3 id="addons"&gt;Addons
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Addons are cluster features and functionalities not yet available in Kubernetes&lt;/strong&gt;, so they are implemented through third-party pods and services&lt;cite&gt; &lt;sup id="fnref13:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DNS&lt;/strong&gt;: the cluster DNS is a DNS server required to assign DNS records to Kubernetes objects and resources&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dashboard&lt;/strong&gt;: a general-purpose web-based user interface for cluster management&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;: collects cluster-level container metrics and stores them in a central data store&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Logging&lt;/strong&gt;: collects cluster-level container logs and stores them in a central log store for analysis.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="networking-challenges"&gt;Networking challenges
&lt;/h2&gt;&lt;p&gt;Decoupled microservices-based applications rely heavily on networking to mimic the tight coupling that was once available in the monolithic era. Networking, in general, is not the easiest to understand and implement. Kubernetes is no exception: as an orchestrator of containerized microservices, it must address several distinct networking challenges&lt;cite&gt; &lt;sup id="fnref14:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Container-to-container communication within pods&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pod-to-pod communication on the same node and across all cluster nodes&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pod-to-Service communication within the same namespace and across cluster namespaces&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;External-to-Service communication so that clients can access applications&lt;/strong&gt; in a cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="container-to-container-within-pods"&gt;Container to container within pods
&lt;/h3&gt;&lt;p&gt;By leveraging the virtualization features of the underlying host OS kernel, a container runtime &lt;strong&gt;creates an isolated network space for each container&lt;/strong&gt; it starts. On Linux, this isolated network space is called a &lt;em&gt;&lt;strong&gt;network namespace&lt;/strong&gt;&lt;/em&gt;. A &lt;em&gt;&lt;strong&gt;network namespace&lt;/strong&gt;&lt;/em&gt; &lt;strong&gt;can be shared between containers or with the host operating system&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When a pod is started, the &lt;code&gt;Container Runtime&lt;/code&gt; initializes a special pause container with the sole purpose of creating a network namespace for the pod. All additional containers, created through user requests, running within the Pod will share the Pause container&amp;rsquo;s network namespace so they can all communicate with each other via localhost.&lt;/p&gt;
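&lt;p&gt;This shared network namespace is easy to observe. In the two-container Pod sketched below (container names and images are illustrative), the second container reaches the nginx server in the first at &lt;code&gt;localhost:80&lt;/code&gt;, because both share the namespace created by the pause container:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns
spec:
  containers:
  - name: web
    image: nginx            # listens on port 80
  - name: probe
    image: busybox
    # Polls the sibling container over localhost, demonstrating that
    # both containers share the pause container's network namespace.
    command: ["sh", "-c", "while true; do wget -q -O- http://localhost:80; sleep 5; done"]
```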
&lt;h3 id="pod-to-pod-across-nodes"&gt;Pod to pod across nodes
&lt;/h3&gt;&lt;p&gt;In a Kubernetes cluster, pods are scheduled on nodes in a nearly unpredictable manner. Regardless of their host node, &lt;strong&gt;pods are expected to be able to communicate with all other pods in the cluster&lt;/strong&gt;, all without the implementation of Network Address Translation (NAT). This is a fundamental requirement of any Kubernetes networking implementation.&lt;/p&gt;
&lt;p&gt;The Kubernetes networking model aims to reduce complexity and &lt;strong&gt;treats Pods as VMs on a network&lt;/strong&gt;, where each &lt;strong&gt;VM is equipped with a network interface&lt;/strong&gt;, so each Pod receives a unique IP address. This model is called &amp;ldquo;&lt;strong&gt;IP-per-Pod&lt;/strong&gt;&amp;rdquo; and ensures pod-to-pod communication, just as virtual machines can communicate with each other on the same network.&lt;/p&gt;
&lt;p&gt;Let us not forget about containers, though. They share the Pod&amp;rsquo;s network namespace and must coordinate port assignments within the Pod just as applications would on a VM, while communicating with each other over &lt;strong&gt;localhost&lt;/strong&gt;. &lt;strong&gt;Containers are integrated with the overall Kubernetes networking model through Container Network Interface (CNI)&lt;/strong&gt;-compatible plugins. &lt;strong&gt;CNI is a set of specifications and libraries that allow plugins to configure networking for containers&lt;/strong&gt;. While there are some core plugins, most CNI plugins are third-party Software-Defined Networking (SDN) solutions that implement the Kubernetes networking model. In addition to addressing the fundamental networking model requirement, some networking solutions offer support for network policies. Flannel, Weave, and Calico are just a few of the SDN solutions available for Kubernetes clusters.&lt;/p&gt;
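&lt;p&gt;As a hedged illustration (the network name and subnet are assumptions), a CNI configuration file for the core &lt;code&gt;bridge&lt;/code&gt; plugin, typically placed under &lt;code&gt;/etc/cni/net.d/&lt;/code&gt;, looks roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-json"&gt;{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
&lt;/code&gt;&lt;/pre&gt;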
&lt;h3 id="pod-to-the-outside-world"&gt;Pod to the outside world
&lt;/h3&gt;&lt;p&gt;A successfully deployed containerized application running in Pods within a Kubernetes cluster may need to be reachable from the outside world. Kubernetes enables external accessibility through &lt;code&gt;Services&lt;/code&gt;, which are &lt;strong&gt;encapsulations of routing rule definitions stored as&lt;/strong&gt; &lt;code&gt;iptables&lt;/code&gt; &lt;strong&gt;entries on cluster nodes and implemented by kube-proxy agents&lt;/strong&gt;. By exposing Services to the external world with the help of &lt;strong&gt;kube-proxy&lt;/strong&gt;, applications become accessible from outside the cluster through a virtual IP address.&lt;/p&gt;
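&lt;p&gt;As an illustrative sketch (the Service name, label, and ports are assumptions), a &lt;code&gt;NodePort&lt;/code&gt; Service makes matching Pods reachable on a fixed port of every node, using the iptables rules that kube-proxy maintains:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort          # reachable from outside the cluster on each node's port 30080
  selector:
    app: web              # routes traffic to Pods carrying this label
  ports:
    - port: 80            # the Service's virtual IP port
      targetPort: 80      # the container port inside the Pod
      nodePort: 30080
&lt;/code&gt;&lt;/pre&gt;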
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;&lt;a class="link" href="https://kubernetes.io/docs/concepts/overview/components/" target="_blank" rel="noopener"
 &gt;Documentation Kubernetes, Kubernetes Components&lt;/a&gt;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref2:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref3:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;&lt;a class="link" href="https://www.edx.org/es/course/introduction-to-kubernetes" target="_blank" rel="noopener"
 &gt;edX, Introduction to kubernetes - The Linux Foundation&lt;/a&gt;&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref2:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref3:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref4:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref5:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref6:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref7:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref8:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref9:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref10:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref11:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref12:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref13:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref14:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Introduction to Kubernetes</title><link>https://adurrr.github.io/en/p/introduction-to-kubernetes/</link><pubDate>Tue, 27 Oct 2020 00:00:00 +0000</pubDate><guid>https://adurrr.github.io/en/p/introduction-to-kubernetes/</guid><description>&lt;p&gt;This post covers the basic introductory concepts of Kubernetes. The main sources of information are the &lt;strong&gt;&lt;a class="link" href="https://www.edx.org/es/course/introduction-to-kubernetes" target="_blank" rel="noopener"
 &gt;Introduction to Kubernetes&lt;/a&gt;&lt;/strong&gt; course by &lt;strong&gt;The Linux Foundation&lt;/strong&gt; on edX, authored by Chris Pokorni and Neependra Khare, and the official &lt;a class="link" href="https://kubernetes.io/docs/home/" target="_blank" rel="noopener"
 &gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="what-is-kubernetes-and-why-is-it-used"&gt;What is Kubernetes and why is it used?
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Kubernetes or &amp;ldquo;K8s&amp;rdquo;&lt;/strong&gt; is a portable, extensible, open-source platform, licensed under &lt;a class="link" href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank" rel="noopener"
 &gt;Apache 2.0&lt;/a&gt;, for managing workloads and services. It is used for &lt;strong&gt;automating the deployment, scaling, and management of containerized applications&lt;/strong&gt; and was originally designed by Google and donated to the &lt;a class="link" href="https://www.cncf.io/" target="_blank" rel="noopener"
 &gt;Cloud Native Computing Foundation&lt;/a&gt; (part of the Linux Foundation). It supports different container runtime environments, including Docker &lt;cite&gt; &lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/cite&gt;&lt;/p&gt;
&lt;p&gt;Currently, there is a trend toward running processes in what is known as the &lt;strong&gt;&amp;ldquo;cloud&amp;rdquo;&lt;/strong&gt;. However, this was preceded by a model known as &lt;strong&gt;monolithic&lt;/strong&gt;, which relied on outdated software architecture principles, with large components written in legacy programming languages and the entire system deployed on expensive hardware that was costly to manage.&lt;/p&gt;
&lt;p&gt;Therefore, the current trend is to separate and simplify each software component, turning it into a distributed component with its own well-defined responsibility. This creates &lt;strong&gt;microservices&lt;/strong&gt; that can be loosely coupled together and are easy to replace or relocate. The microservices architecture is aligned with the principles of &lt;strong&gt;Event-Driven Architecture (EDA) and Service-Oriented Architecture (SOA)&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Each microservice is developed in a modern programming language, selected as the most suitable for the type of service and its function. This offers great &lt;strong&gt;flexibility in combining microservices with specific hardware&lt;/strong&gt; when necessary, enabling deployments on &lt;strong&gt;low-cost commodity hardware&lt;/strong&gt;. Although the &lt;strong&gt;distributed nature&lt;/strong&gt; of microservices &lt;strong&gt;adds complexity&lt;/strong&gt; to the architecture, one of the greatest benefits of microservices is &lt;strong&gt;scalability&lt;/strong&gt;. With the overall application becoming &lt;strong&gt;modular, each microservice can be scaled individually&lt;/strong&gt;, either &lt;strong&gt;manually or automatically&lt;/strong&gt; through demand-based autoscaling. Additionally, there is practically &lt;strong&gt;no downtime or service disruption&lt;/strong&gt; for clients because updates are rolled out seamlessly, one service at a time, instead of having to recompile, rebuild, and restart an entire monolithic application &lt;cite&gt; &lt;sup id="fnref:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/cite&gt;&lt;/p&gt;
&lt;p&gt;In summary, these microservices are deployed in containers and &lt;strong&gt;Kubernetes is a container orchestrator&lt;/strong&gt;. Therefore, to understand what Kubernetes is, it is necessary to review the basic concepts of containers and container orchestrators.&lt;/p&gt;
&lt;h2 id="what-are-containers"&gt;What are containers?
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt; are &lt;strong&gt;OS-level virtual spaces&lt;/strong&gt; that bundle application code with associated libraries and configuration files, along with the dependencies needed for the application to run. They provide &lt;strong&gt;scalability and high performance&lt;/strong&gt; to applications on &lt;strong&gt;any infrastructure&lt;/strong&gt; of your choice. They are best suited for delivering microservices by providing &lt;strong&gt;portable and isolated virtual environments&lt;/strong&gt; so applications can run &lt;strong&gt;without interference from other running applications&lt;/strong&gt;.&lt;/p&gt;
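&lt;p&gt;As a minimal sketch of such a bundle (file names and base image are assumptions), a &lt;code&gt;Dockerfile&lt;/code&gt; declares the application code, its dependencies, and its entry point in one portable image definition:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-dockerfile"&gt;FROM python:3.12-slim                 # minimal base layer with the language runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bundle the library dependencies
COPY . .                              # bundle the application code
CMD ["python", "app.py"]              # the process the container runs
&lt;/code&gt;&lt;/pre&gt;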
&lt;h3 id="benefits-of-using-containers"&gt;Benefits of using containers
&lt;/h3&gt;&lt;p&gt;The benefits of using containers include&lt;cite&gt; &lt;sup id="fnref:3"&gt;&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref"&gt;3&lt;/a&gt;&lt;/sup&gt;:&lt;/cite&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Agile application creation and deployment&lt;/strong&gt;: Greater ease and efficiency in creating container images instead of virtual machines.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Continuous development, integration, and deployment&lt;/strong&gt;: Allows container images to be built and deployed frequently and reliably, facilitating rollbacks since the image is immutable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dev and Ops separation of concerns&lt;/strong&gt;: You can create container images at build time rather than at deployment time, decoupling the application from the infrastructure.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observability&lt;/strong&gt;: Surfaces not only OS-level information and metrics, but also application health and other signals.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consistency across development, testing, and production environments&lt;/strong&gt;: The application runs the same on a laptop as it does in the cloud.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud and OS distribution portability&lt;/strong&gt;: Runs on Ubuntu, RHEL, CoreOS, your physical datacenter, Google Kubernetes Engine, and everything else.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Application-centric management&lt;/strong&gt;: Raises the level of abstraction from the OS and virtualized hardware to the application running on a system with logical resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Loosely coupled, distributed, elastic, liberated microservices&lt;/strong&gt;: Applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than as a monolithic application running on a single high-capacity machine.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource isolation&lt;/strong&gt;: Makes application performance more predictable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource utilization&lt;/strong&gt;: Enables greater efficiency and density.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="what-are-container-orchestrators"&gt;What are container orchestrators?
&lt;/h2&gt;&lt;p&gt;In development environments, running containers on a single host for application development and testing can be a viable option. However, when &lt;strong&gt;migrating to quality assurance (QA) and production (Prod) environments&lt;/strong&gt;, it is no longer viable because applications and &lt;strong&gt;services must meet specific requirements&lt;/strong&gt;&lt;cite&gt; &lt;sup id="fnref1:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fault tolerance.&lt;/li&gt;
&lt;li&gt;On-demand scalability.&lt;/li&gt;
&lt;li&gt;Optimal resource usage.&lt;/li&gt;
&lt;li&gt;Auto-discovery to automatically discover and communicate between components.&lt;/li&gt;
&lt;li&gt;Accessibility from the outside world.&lt;/li&gt;
&lt;li&gt;Seamless security updates or patches with zero downtime.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Container orchestrators&lt;/strong&gt; are tools that &lt;strong&gt;group systems to form clusters where container deployment and management are automated at scale, meeting the requirements&lt;/strong&gt; listed above.&lt;/p&gt;
&lt;p&gt;There are several container orchestrator solutions, and some of the available ones are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Amazon Elastic Container Service (ECS).&lt;/li&gt;
&lt;li&gt;Azure Container Instances.&lt;/li&gt;
&lt;li&gt;Azure Service Fabric.&lt;/li&gt;
&lt;li&gt;Kubernetes.&lt;/li&gt;
&lt;li&gt;Marathon.&lt;/li&gt;
&lt;li&gt;Nomad.&lt;/li&gt;
&lt;li&gt;Docker Swarm.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While it is feasible to manage a few containers manually, &lt;strong&gt;orchestrators greatly simplify administration for operators&lt;/strong&gt;, especially when dealing with hundreds or thousands of containers. Most container orchestrators &lt;strong&gt;can perform&lt;/strong&gt; the following actions &lt;cite&gt; &lt;sup id="fnref2:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Group hosts&lt;/strong&gt; while creating a cluster.&lt;/li&gt;
&lt;li&gt;Schedule containers to &lt;strong&gt;run on cluster hosts based on resource availability&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Allow containers in a cluster to &lt;strong&gt;communicate with each other regardless of the host they are deployed on&lt;/strong&gt; in the cluster.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bind containers and storage resources&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Group sets of similar containers and bind them to &lt;strong&gt;load-balancing&lt;/strong&gt; constructs to simplify access to containerized applications, creating a &lt;strong&gt;level of abstraction between containers and the user&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Manage and optimize resource usage&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Allow the &lt;strong&gt;implementation of policies to secure access&lt;/strong&gt; to applications running inside containers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With all these configurable yet flexible features, container orchestrators are a great choice when it comes to managing containerized applications at scale.&lt;/p&gt;
&lt;h2 id="kubernetes-features"&gt;Kubernetes features
&lt;/h2&gt;&lt;p&gt;Kubernetes offers a very broad set of features for container orchestration. Its main features are&lt;cite&gt; &lt;sup id="fnref3:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/cite&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automatic bin packing&lt;/strong&gt;: Kubernetes automatically schedules containers based on resource needs and constraints, to maximize utilization without sacrificing availability.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Self-healing&lt;/strong&gt;: Kubernetes automatically replaces and reschedules containers from failed nodes. It kills and restarts containers that do not respond to health checks, based on existing rules or policies. It also prevents traffic from being routed to unresponsive containers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Horizontal scaling&lt;/strong&gt;: With Kubernetes, applications scale manually or automatically based on CPU usage or custom metrics.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Service discovery and load balancing&lt;/strong&gt;: Containers receive their own IP addresses from Kubernetes, while Kubernetes assigns a single Domain Name System (DNS) name to a set of containers to help load balance requests across all containers in the set.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automated rollouts and rollbacks&lt;/strong&gt;: Kubernetes seamlessly rolls out and rolls back application updates and configuration changes, constantly monitoring application health to prevent any downtime.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Secret and configuration management&lt;/strong&gt;: Kubernetes manages sensitive data and application configuration details separately from the container image, so the image does not need to be rebuilt when they change. Secrets hold sensitive or confidential information that is passed to the application without exposing its content in the application code or configuration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Storage orchestration&lt;/strong&gt;: Kubernetes automatically mounts Software-Defined Storage (SDS) solutions to containers from local storage, external cloud providers, distributed storage, or network storage systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Batch execution&lt;/strong&gt;: Kubernetes supports batch execution, long-running jobs, and replaces failed containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
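&lt;p&gt;Several of these features meet in a single &lt;code&gt;Deployment&lt;/code&gt; manifest. As an illustrative sketch (names, image, and probe path are assumptions): &lt;code&gt;replicas&lt;/code&gt; drives horizontal scaling, the &lt;code&gt;livenessProbe&lt;/code&gt; drives self-healing restarts, and the rolling-update strategy drives downtime-free rollouts:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # horizontal scaling: adjusted manually or by an autoscaler
  strategy:
    type: RollingUpdate       # automated rollouts: Pods are replaced gradually
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          livenessProbe:      # self-healing: restart containers failing this check
            httpGet:
              path: /
              port: 80
&lt;/code&gt;&lt;/pre&gt;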
&lt;p&gt;The &lt;strong&gt;architecture of Kubernetes is modular and pluggable&lt;/strong&gt;. It not only orchestrates microservice-type applications as decoupled modules, but its own architecture also follows decoupled microservice patterns. Kubernetes functionality can be extended by writing custom resources, operators, custom APIs, scheduling rules, or plugins.&lt;/p&gt;
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;&lt;a class="link" href="https://es.wikipedia.org/wiki/Kubernetes" target="_blank" rel="noopener"
 &gt;Wikipedia, Kubernetes&lt;/a&gt;&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;&lt;a class="link" href="https://www.edx.org/es/course/introduction-to-kubernetes" target="_blank" rel="noopener"
 &gt;edX, Introduction to kubernetes - The Linux Foundation&lt;/a&gt;.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref2:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref3:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:3"&gt;
&lt;p&gt;&lt;a class="link" href="https://kubernetes.io/es/docs/concepts/overview/what-is-kubernetes/" target="_blank" rel="noopener"
 &gt;Kubernetes Documentation, What is Kubernetes?&lt;/a&gt;&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item></channel></rss>