Samuel Bartels | Multi-Cloud DevOps Engineer | Software Developer | Helping you transition into cloud engineering

29/09/2022

When it comes to designing a CI/CD pipeline, these are some of the rules I always follow or lay down (a minimal pipeline sketch follows the list):

1. Performing static analysis should be the first step in the CI pipeline. Static analysis highlights possible coding errors and makes sure the codebase follows the agreed formatting conventions. Tools like Checkstyle, FindBugs, JSLint, JSHint, PMD, etc. can be used to perform static analysis.

2. The CI pipeline should run tests written and implemented through test-driven development (TDD).

3. When the pipeline fails, fixing the problem should take priority over all other tasks. If action is postponed, the next executions will fail as well.
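A minimal sketch of a pipeline script that follows these rules, assuming a Maven project and the plugins named above (the exact commands will differ from project to project):

  #!/bin/sh
  set -e                           # any failing step aborts the pipeline immediately
  mvn checkstyle:check pmd:check   # 1. static analysis runs first
  mvn test                         # 2. tests written through TDD
  mvn verify                       # package and run the remaining checks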

26/09/2022

One important thing to discuss is the order of instructions in a Dockerfile. On one hand, it needs to be logical. When a docker build command is run, Docker will go instruction by instruction and check whether some other build process already created the image. Once an instruction that will build a new image is found, Docker will build not only that instruction but all those that follow. That means that, in most cases, COPY and ADD instructions should be placed near the bottom of the Dockerfile. Even within a group of COPY and ADD instructions, we should make sure to place higher those files that are less likely to change.

In my example, I'm adding run.sh before the JAR file and front-end components, since the latter are likely to change with every build. If I execute the docker build command a second time, you’ll notice that Docker outputs ---> Using cache in all steps.

Later on, when I change the source code, Docker will continue outputting ---> Using cache only until it gets to one of the last two COPY instructions (which one it will be, depends on whether I changed the JAR file or the front-end components).
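A hedged reconstruction of the kind of Dockerfile being described (the base image and file paths are illustrative assumptions, not the exact ones used):

  FROM openjdk:8-jre                 # base image rarely changes, so it sits at the top
  COPY run.sh /run.sh                # changes rarely, so it comes before the artifacts below
  COPY target/app.jar /app.jar       # the JAR changes with almost every build
  COPY client/dist /client           # front-end components also change often
  CMD ["/run.sh"]

With this ordering, a change to the JAR or the front-end invalidates only the last COPY layers; everything above them keeps coming from the cache.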

25/09/2022

The development environment is often the first thing newcomers to a software project need to face. While each project is different, it is not uncommon for them to spend a whole day setting up the environment, and many more days trying to understand how the application works.

How much time does it take to, for example, install the JDK, set up a local instance of a JBoss server, do all the configuration, and handle all the other, often complicated, things required for the back-end part of the application? On top of that, add the time to do the same for the front-end when it is separated from the back-end. And how much time does it take to understand the inner workings of some monolithic application that has thousands, tens of thousands, or even millions of lines of code split into layers upon layers of what was initially thought of as a good idea but with time ended up as something that adds more complexity than benefits?

Development environment setup and simplicity are some of the areas where containers and microservices can help a lot. Microservices are, by definition, small. How much time does it take
to understand a thousand (or fewer) lines of code? Even if you never programmed in the language used in the microservice in front of you, it should not take a lot of time to understand what it does.
Containers, on the other hand, especially when combined with Vagrant, can make the development environment setup feel like a breeze. Not only that the setup process can be painless and fast, but the result can be as close as one can get to the production environment.

Vagrant is a command-line tool for creating and managing virtual machines through a hypervisor like VirtualBox or VMWare. Vagrant isn’t a hypervisor, just a driver that provides a consistent
interface. With a single Vagrantfile, we can specify everything Vagrant needs to know to create, through VirtualBox or VMWare, as many VMs as needed. Since all it needs is a single configuration
file, it can be kept in the repository together with the application code. It is very lightweight and portable and allows us to create reproducible environments no matter the underlying OS. While
containers make the usage of VMs partly obsolete, Vagrant shines when we need a development environment.

I started by specifying the box to be ubuntu/trusty64. The box is (a kind of) a VM on top of which we can add things we require. After the box, I set up the current directory (.) to be synced with the /vagrant directory inside the VM. This way, all the files from the current directory will be freely available within the virtual machine.
I specified that the VM should have 2 GB of RAM and defined one VM called dev. Inside the definition of the dev VM, I set up an IP that Vagrant will expose and that it should run an Ansible playbook dev.yml.
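A hedged reconstruction of the Vagrantfile described above (the IP address and the playbook path are assumptions based on the description, not the exact originals):

  Vagrant.configure("2") do |config|
    config.vm.box = "ubuntu/trusty64"             # the base box
    config.vm.synced_folder ".", "/vagrant"       # current directory available inside the VM
    config.vm.provider "virtualbox" do |vb|
      vb.memory = 2048                            # 2 GB of RAM
    end
    config.vm.define "dev" do |dev|
      dev.vm.network "private_network", ip: "10.100.198.200"   # illustrative IP
      dev.vm.provision "ansible" do |ansible|
        ansible.playbook = "ansible/dev.yml"      # the dev.yml Ansible playbook
      end
    end
  end

Bringing the environment up is then a single vagrant up dev command.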

25/09/2022

Continuous deployment, microservices, and containers are a match made in heaven. They are like the three musketeers, each capable of great deeds but when joined, capable of so much more.

25/09/2022

My simple explanation of how continuous integration (CI) works.
-------------------------------------------------------------------------------------

Assuming you work in a software team of, say, 4 members and you're working on a project, developers 1 to 4 will commit their code to feature branches in a code repository like GitHub, GitLab, or Bitbucket. CI tools will monitor the code repository, and when a change is detected, they run the CI pipeline steps.

The pipeline runs different sets of tasks with the goal of verifying that the code works as expected. The end result is partial trust in the change, which might still require manual testing.

If any of the steps fails, the process is aborted and a failure notification is sent to the developer that made the commit and all other interested parties.

If the whole pipeline is run without any failure, the commit is promoted to a release candidate that requires additional manual verifications.

Hopefully, the diagram helps you understand it better.

25/09/2022

One thing I discovered while building CI pipelines is that when the pipeline fails, fixing the problem has a higher priority than any other task. If this action is postponed, the next executions of the pipeline will fail as well. When you ignore the failure notifications, the CI process slowly begins losing its purpose. The sooner we fix the problem discovered during the execution of the CI pipeline, the better off we are. If corrective action is taken immediately, knowledge about the potential cause of the problem is still fresh (after all, it’s been only a few minutes between the commit and the failure notification), and fixing it should be trivial.
So how does it work? Details depend on tools, programming language, projects, and many other factors. The most common flow is the following (a command-level sketch follows the list).
• Pushing to the code repository
• Static analysis
• Pre-deployment testing
• Packaging and deployment to the test environment
• Post-deployment testing
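A minimal command-level sketch of that flow, assuming a Maven project packaged into a container image and a test environment reachable with kubectl (every name below is illustrative):

  mvn checkstyle:check                                      # static analysis
  mvn test                                                  # pre-deployment testing
  docker build -t registry.example.com/app:$GIT_COMMIT .    # packaging
  docker push registry.example.com/app:$GIT_COMMIT
  kubectl --context test set image deployment/app app=registry.example.com/app:$GIT_COMMIT
  curl -f https://app.test.example.com/health               # post-deployment (smoke) testing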

25/09/2022

Trying to fit a monolithic application developed by many people throughout the years, without tests, with tight coupling and outdated technology, into a modern CI/CD process is like attempting to make an eighty-year-old lady look young again.

24/09/2022

Throughout all the years I’ve been working in the software industry, there is no single tool, framework, or practice that I have admired more than continuous integration (CI) and, later on, continuous delivery (CD). The real meaning of that statement hides behind the scope of what CI/CD encompasses.

In the beginning, I thought that CI/CD meant that I knew Jenkins and was able to write scripts. As time passed, I got more and more involved and learned that CI/CD relates to almost every aspect of software development. That knowledge came at a cost. I failed (more than once) to create a successful CI pipeline with applications I worked with at the time. Even though others considered the result a success, now I know that it was a failure because the approach I took was wrong.

CI/CD cannot be done without making architectural decisions. The same can be said for tests, configurations, environments, fail-over, and so on. To create a successful implementation of CI/CD, we need to make a lot of changes that, at first look, do not seem to be directly related. We need to apply some patterns and practices from the very beginning. We have to think about architecture, testing, coupling, packaging, fault tolerance, and many other things. CI/CD requires us to influence almost every aspect of software development. That diversity is what made me fall in love with it. By practicing CI/CD we are influencing and improving almost every aspect of the software development life cycle.

24/09/2022

I get lots of these opportunities in my mail every day. I will start sharing them here, hopefully, someone that follows me and needs it will grab it.
For this particular opportunity, the employer's name is there. Please visit their website and go to their career page to apply. Best of luck.

24/09/2022

Kubernetes Metrics Server
--------------------------------------------------------------

What is Metrics Server?
A simple explanation is that it collects information about the used resources (memory and CPU) of nodes and Pods. It does not store metrics, so do not expect to use it to retrieve historical values or predict tendencies.

Metrics Server's goal is to provide an API that can be used to retrieve current resource usage. We can use that API through kubectl or by sending direct requests with, let's say, curl. In other words, Metrics Server collects cluster-wide metrics and allows us to retrieve them through its API.
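For example, assuming Metrics Server is already installed in the cluster, current usage can be retrieved like this:

  kubectl top nodes                                       # current CPU and memory per node
  kubectl top pods --all-namespaces                       # current CPU and memory per Pod
  kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes    # the same data straight from the API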

23/09/2022

Whenever I want to deploy an application to the cloud, I only think of four things:
1. Containerization
2. Applications' high availability
3. Application monitoring
4. Deployment strategy

23/09/2022

It takes more than just using a provisioning tool to start up your cluster to create a durable and dependable Kubernetes architecture. A series of architecture decisions and their execution make up a strong infrastructure design.
The fundamental guidelines that guide my decision-making throughout the Kubernetes infrastructure design process are listed briefly below.
1. Go managed: Although managed services can look pricier than self-hosted ones, I still prefer them. I prefer to use AWS EKS and other managed infrastructure services like databases, object stores, caches, and many others.
2. Everything-as-code: I use declarative infrastructure as code (IaC) and configuration as code (CaC) tools over imperative counterparts.
3. Automation: I prefer to automate everything as it is more efficient and easier to manage and scale. When it comes to Kubernetes, I always use GitOps.
4. Source of truth: I use source code control systems such as Git and GitHub to store and version my infrastructure as code.
5. Cloud-agnostic: Being cloud-agnostic means that you can run your workloads on any cloud with minimal vendor lock-in. I use Docker, Terraform, and Kubernetes to achieve this.
6. Plan for failures: When I'm designing a Kubernetes cluster, I design it to survive outages and failures by using high-availability principles. I achieve this by applying chaos engineering ideas, disaster recovery solutions, infrastructure testing, and infrastructure CI/CD.
7. Operational efficiency: When designing a Kubernetes cluster, I take into consideration how to prepare for outages, cluster upgrades, backups, performance tuning, resource utilization, and cost control, since neglecting these creates bottlenecks.

23/09/2022

Having used Kubernetes in production, these were the challenges I encountered in the initial stages
---------------------------------------------------------------------------------
Although Kubernetes may be simple to install, it is difficult to run and maintain. When used in production, Kubernetes presents a variety of difficulties and concerns, including scaling, uptime, security, resilience, observability, resource consumption, and cost control.
Container management and orchestration have been solved by Kubernetes, and it has produced a standard overlay over the compute services. However, key fundamental services, like Identity and Access Management (IAM), storage, and image registries, are still not properly or entirely supported by Kubernetes.
Typically, a Kubernetes cluster is a part of a larger company's production infrastructure, together with databases, IAM, LDAP, messaging, streaming, and other things. Connecting a Kubernetes cluster to these external infrastructure components is necessary to put it in production.
The on-premises infrastructure and services are expected to be managed and integrated by Kubernetes even during cloud transformation initiatives, which increases the level of production complexity.
Another difficulty arises when companies implement Kubernetes with the assumption that it will fix the scaling and uptime difficulties that their apps experience, but they typically do not plan for day-2 challenges. Security, scaling, uptime, resource use, cluster migrations, upgrades, and performance tuning all suffer as a result, sometimes disastrously.
There are management problems in addition to technical ones, particularly when using Kubernetes in large organizations with numerous teams and when the firm is not well-prepared to have the appropriate team structure to run and maintain its Kubernetes infrastructure. Teams may find it difficult to coordinate around best practices, delivery methodologies, and common tools as a result.

22/09/2022

Linking Docker containers
----------------------------------
Linking one container to another is a simple process involving container names.
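A hedged reconstruction of the kind of command being broken down below, assuming a container named redis is already running (the image name and volume path are illustrative):

  docker run -d -p 4567:4567 --name webapp \
    -v $PWD/webapp:/opt/webapp \
    --link redis:db \
    example/sinatra-webapp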

There's a lot going on in this command, so let's break it down. Firstly, we're exposing port 4567 using the -p flag so we can access our web application externally.

We've also named our container webapp using the --name flag and mounted our web application as a volume using the -v flag.

This time, however, we've used a new flag called --link. The --link flag creates a parent-child link between two containers. The flag takes two arguments: the container name to link and an alias for the link. In this case, we're creating a child relationship with the redis container with an alias of db.

The alias allows us to consistently access the exposed information without needing to be concerned about the underlying container name. The link gives the parent container the ability to communicate with the child container and shares some connection details with it to help you configure applications to make use of the link.

We also get a security-related benefit from this linkage. You'll note that we didn't expose the Redis port with the -p flag. We don't need to. By linking the containers together, we're allowing the parent container to communicate to any open ports on the child container (i.e., our parent webapp container can connect to port 6379 on our child redis container).

But even better, no other container can connect to this port. Given that the port is not exposed to the local host, we now have a very strong security model for limiting the attack surface and network exposure of a containerized application.

21/09/2022

Docker has another facet: internal networking.
Every Docker container is assigned an IP address, provided through an interface created when we installed Docker. That interface is called docker0. Let's look at that interface on our Docker host now.
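On most Linux hosts the interface can be inspected with, for example:

  ip addr show docker0     # or: ifconfig docker0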

You can see that the docker0 interface has an RFC1918 private IP address in the 172.16-172.30 range. This address, 172.17.42.1, will be the gateway address for the Docker network and all our Docker containers.

21/09/2022

Are you interested in any of these topics?🔥
- AWS, Azure, GCP ☁️
- Cloud Certifications 🏆
- DevOps / IaC 🛠️
- Docker / Kubernetes 🐳
- Security 🔑
- Linux 🐧
I'm sharing my knowledge daily, so follow me if you haven't already ✅

21/09/2022

An open-source engine called Docker simplifies the installation of software inside containers. It was created by the staff at Docker, Inc. (previously known as dotCloud Inc., a pioneer in the Platform-as-a-Service (PaaS) market), and it was made available by them under the terms of the Apache 2.0 license. It is meant to provide a lightweight and fast environment in which to run your code, as well as an efficient path to get that code from your laptop to your test environment and finally into production.

When a container is launched from an image, the ENTRYPOINT and/or CMD are executed from the working directory specified by the WORKDIR instruction.
It can be used to set the working directory for a group of instructions or for the final container.
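For instance, here is a hedged sketch of the Dockerfile fragment described below (the paths follow the description; the rest is assumed):

  WORKDIR /opt/webapp/db       # the next RUN executes from this directory
  RUN bundle install
  WORKDIR /opt/webapp          # the final working directory for the container
  ENTRYPOINT [ "rackup" ]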

Here I changed into the /opt/webapp/db directory to run bundle install and then changed into the /opt/webapp directory prior to specifying our ENTRYPOINT instruction of rackup.

20/09/2022

In this project, I deployed a PHP application to AWS using Terraform.
I created a custom VPC, then created two private subnets and one public subnet. I deployed the app server in the public subnet and the database instance in a private subnet. I configured an internet gateway to give the VPC access to the internet.
Please find attached the architecture used for this project, along with a link to the Terraform code.
https://github.com/samuelbartels20/lamp_stack_terraform

17/09/2022

Building out this simulated on-prem data center in the cloud is one of the best mini projects I have done. I have really learnt so much, from security to multi-infrastructure connectivity. Wheww!
Securing the data center was my best moment on the project.

15/09/2022

In this project, I built a complete infrastructure from the ground up using Terraform. This infrastructure has 4 VPCs with their associated subnets. One of the VPCs serves as a simulated on-premise data center. The other 3 serve as cloud infrastructure.

I connected the first 3 of the VPCs so they could talk to each other using a transit gateway. I connected the 4th VPC which serves as an on-premise data center to talk to the first three VPCs using a site-to-site VPN.

Please find attached the architecture I designed for this project and also a link to the terraform code on GitHub.

There is still a little work to be done on the VPN connection. I will finish it up when I have free time. You could replicate this infrastructure for your organization when you're looking to move to the cloud.

https://github.com/samuelbartels20/ground-up-infrastructure

08/09/2022

You can launch AWS services in a virtual network that you specify by creating an Amazon Virtual Private Cloud (Amazon VPC) that is logically separated from the rest of the AWS Cloud. You can choose your own IP address range, create your own subnets, configure route tables, and set up network gateways—you have total control over your virtual networking environment. For secure and convenient access to resources and applications, you can use both IPv4 and IPv6 in your VPC.
Within each Region, there are several isolated locations known as Availability Zones (AZs). A VPC spans all of the AZs in the Region.
An internet gateway (IGW) is a VPC component that enables communication between instances in your VPC and the internet. It is horizontally scaled, redundant, and highly available. As a result, your network traffic faces no availability risks or bandwidth constraints.
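As a rough illustration, creating a VPC with a subnet and an attached internet gateway via the AWS CLI might look like this (the CIDR blocks and resource IDs are placeholders):

  aws ec2 create-vpc --cidr-block 10.0.0.0/16
  aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24
  aws ec2 create-internet-gateway
  aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123
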
In this project, I created three VPCs with internet gateways and EC2 instances, one per VPC. EC2 instances in different VPCs are not able to communicate with each other using private IP addresses.

03/09/2022

Controlling resource deployments in your Kubernetes cluster can be a difficult task. For example, pushing changes to a production environment may result in the installation of an incompatible package or a vulnerable dependency that crashes your services. We can define strict regulations to launch only approved resources in our cluster by creating custom admission webhooks for Kubernetes.

In this project, I used the AWS Serverless Application Model (SAM) to create serverless admission webhooks. I then set up a webhook to validate Amazon Elastic Kubernetes Service (EKS) deployments against an image in the Amazon Elastic Container Registry (ECR).

29/08/2022

CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and microservices. The metrics include the utilization of resources such as CPU, memory, disk, and network, and they are available in CloudWatch automatic dashboards.

29/08/2022

Amazon CloudWatch Evidently is a new capability that helps application developers safely validate new features across the full application stack. With CloudWatch Evidently, cloud engineers can perform A/B testing of their applications. Typical examples are PayPal and LinkedIn. If you live in Africa, PayPal serves you features different from those served to users living in, say, the UK or the US. The same goes for LinkedIn: if you live in Europe or North America, LinkedIn offers a video call feature that isn't available to users in African countries like Ghana.
As you can see from the image below, I was conducting an A/B test for a feature I deployed.

28/08/2022

I just discovered bpytop as an alternative to htop. I'm loving it already.

28/08/2022

It's amazing the kind of metrics one can analyse using AWS CloudWatch.

28/08/2022

AWS CloudWatch can really give you a whole lot of insight into the resources within your cloud infrastructure.

28/08/2022

A whole lot of metrics can be analysed using AWS CloudWatch.
As you can see from the image, I was analysing the EKS nodes.

28/08/2022

The AWS CloudWatch ServiceLens service map is a powerful tool that enables you to create a map of all the services you're running in your infrastructure. I used it to create a map of the previous application I deployed.

28/08/2022

AWS provides native monitoring, logging, alarming, and dashboards via Amazon CloudWatch, as well as tracing via AWS X-Ray. They provide the three pillars (Metric, Logs, and Traces) of an observability solution when deployed together.
In this project, I deployed a microservices app to AWS using the following infrastructure.
1. Source code repo - GitHub
2. IDE - AWS Cloud9
3. IaC - AWS CloudFormation
4. Deployment - AWS CDK
5. Database - Amazon Aurora PostgreSQL, AWS DynamoDB
6. Observability - AWS X-Ray, AWS CloudWatch, Grafana, Prometheus
7. Backend - Golang
8. Frontend - ReactJS
9. Notification - AWS SNS
10. Static hosting - AWS S3 for the frontend files
11. CI/CD - Cloud9 -> GitHub -> CircleCI (image build) -> ECR (hosting images)
12. Container orchestration - AWS EKS for hosting the backend APIs
13. Message queuing - AWS SQS

25/08/2022

I have been building and deploying microservices on the cloud for some time now, and each day it still feels like I don't know anything. It's OK if you're having imposter syndrome; I get it too sometimes. One thing that keeps me going is my desire to fight poverty with everything in me.
I have vowed to fight poverty with everything in me and this has always been my motivation. Keep going! one day you will look back and you will be proud of yourself for never giving up.

24/08/2022

The best career advice I have received so far is to productize myself, be useful to others, and in return either learn from them or earn from them.

23/08/2022

GitOps is a framework that uses version control (Git) to automate infrastructure deployment and maintenance. Along with CI/CD, it allows developers to focus on the code while it is automatically tested and deployed to a Kubernetes cluster.
In this project, I deployed an application to AWS, configured Prometheus and Grafana, and used CircleCI to manage continuous integration and ArgoCD to manage continuous deployment.

21/08/2022

As microservices and cloud adoption increase, it's so important to monitor the resources you have in the cloud. Two tools that I have been using to monitor my resources in the cloud are Prometheus and Grafana.

Prometheus gathers data from your resources in the cloud, and it does so using exporters. Grafana visualizes that data so you can easily understand it. For Grafana to be able to visualize the data collected by Prometheus, you as a cloud engineer need to specify Prometheus as a data source in Grafana.

Hopefully, the below image will help you understand it more

21/08/2022

When looking to deploy your containerized applications, one of the biggest hurdles you will face is deciding whether to use AWS EKS or AWS ECS.

It's a big decision you will need to make, and a long essay could be written about it, but I will summarize it from my experience.
AWS ECS is vendor-specific; that means it's a service available only on AWS.

If your company plans to go the vendor-specific way, meaning you want all your data on AWS, then I suggest ECS. If your company plans to one day migrate its data from AWS to another vendor, say GCP, then I suggest EKS.

One thing you need to know is that with ECS it's practically impossible to migrate your container workloads to another vendor, but with EKS you can. In all, it's a decision your company will have to make, taking into consideration several factors.

Personally, I recommend using EKS instead of ECS, because I wouldn't want my data locked into AWS forever.

20/08/2022

Monitoring application metrics in the cloud can be challenging, especially if you do not own a dedicated server. One tool that has helped me troubleshoot my deployments in the cloud is Komodor.
Komodor is a troubleshooting platform for Kubernetes-based environments. It tracks changes and events across the entire system's stack and provides the needed context to troubleshoot issues in our Kubernetes cluster efficiently.

NB: As you can see from the images, I did all the deployments into the default namespace, which is not good practice. In production, you will need to create custom namespaces for your deployments.

Disclaimer; I do not work for them nor have I been paid to promote them. It's a tool I have used hence my recommendation.

19/08/2022

What will happen if I run this script?

18/08/2022

As you work with disks in the cloud, one of the things you will need to do is mount them so they can be used. Use 'blkid' to get the block ID.
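For example, assuming the attached disk shows up as /dev/xvdf (device names vary between instances):

  sudo blkid /dev/xvdf           # prints the UUID and filesystem type
  sudo mkdir -p /data
  sudo mount /dev/xvdf /data     # or add an /etc/fstab entry keyed on the UUID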

18/08/2022

Working with disks in the cloud can be overwhelming for a beginner.
This has always been my approach when it comes to working on disks in the cloud.
Assuming I have servers running in the cloud, say AWS, and I need to attach disks to them, I create an EBS volume and attach it to the servers. The following are the next steps I perform (a command sketch follows the list):
1. I will partition the disks using utilities like 'gdisk'
2. I will mark the partitioned disks as physical volumes to be used by LVM (Logical Volume Manager) using the 'pvcreate' utility.
3. I will use 'vgcreate' utility to add all PVs to a volume group (VG).
4. I will use 'lvcreate' utility to create logical volumes.
5. I will use mkfs.ext4 to format the logical volumes with ext4 filesystem
6. I will finally mount the volumes.
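A hedged command sketch of those steps, assuming the EBS volume is attached as /dev/xvdf (the device, volume group, and logical volume names are illustrative):

  sudo gdisk /dev/xvdf                             # 1. create a partition, e.g. /dev/xvdf1
  sudo pvcreate /dev/xvdf1                         # 2. mark the partition as an LVM physical volume
  sudo vgcreate data_vg /dev/xvdf1                 # 3. add the PV to a volume group
  sudo lvcreate -n data_lv -l 100%FREE data_vg     # 4. create a logical volume
  sudo mkfs.ext4 /dev/data_vg/data_lv              # 5. format it with ext4
  sudo mount /dev/data_vg/data_lv /data            # 6. mount it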
