Kubernetes Workload on AWS Ep14 by Karan Bhandari
February 23, 2020
https://github.com/weaveworks/eksctl
eksctl create cluster --config-file=file.yml
Approximate Transcript
This is the Technology Icing podcast, powered by Karan Bhandari, and in this episode I'm going to cover the topic of Kubernetes on AWS, which is generally driven through EKS, the Elastic Kubernetes Service.

Let's try to explain a typical scenario. For example, a retail bank wants to make a set of services available for its customers to consume from mobile apps and web applications. You have a simple transfer screen and a simple payments module. So let's assume you're building these as microservices, and your payments module has something interesting: it accepts cheque scanning, so you can deposit to your account, apart from person-to-person pay. You would typically structure it in this fashion: a transactions or payments microservice, an accounts microservice, and an image-recognition/OCR microservice.

To start composing all of this, you first have to know about a concept called Kubernetes Deployments. A Deployment is where you mention, say, that you want 3 replicas of the accounts service, because accounts would be used not just for transactions but also for the "from" and "to" parts of payments. For another microservice you could say you want just one replica. And since you feel your image-recognition solution would take more resources, you can also define how much of the nodes it may occupy. TensorFlow for Python has support for a coordinator which allows the tensors to flow through multiple nodes.

To go about doing that, you need to define Kubernetes YAML files. You would first need to install the EKS CLI, which is called eksctl. You also need to ensure you have the latest version of kubectl installed, which is available via a curl command, and you can install minikube so that you can do some testing locally. Minikube depends on a hypervisor such as VMware.
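As a rough sketch of the Deployment idea above — the names, image URI, port, and resource numbers are all placeholders of mine, not from the episode — a manifest asking for 3 replicas of an accounts service might look like:

```yaml
# Hypothetical Deployment: 3 replicas of an accounts microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts-service
  labels:
    app: accounts
spec:
  replicas: 3
  selector:
    matchLabels:
      app: accounts
  template:
    metadata:
      labels:
        app: accounts
    spec:
      containers:
        - name: accounts
          # Placeholder ECR image URI
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/accounts:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
```

You would put one such file per microservice in a deployments directory and point `kubectl apply` at it.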
I have not seen much success using a bare-metal OS for this. It's easier to do all of it from a Linux machine; I feel it's better to partition your machine into Linux and Windows, or maybe just work from a low-powered EC2 machine — but only for the part where kubectl is installed, not where the actual workload is running.

Certain commands you need to keep in mind are:
kubectl get pods
kubectl get services
kubectl get secrets
These are for finding out what's running in your environment, because kubectl is the command-line interface that talks to your pods, your clusters, your services, your deployments, your replicas.

If you ever need to debug, you may need some more advanced commands, like:
kubectl describe pod
kubectl logs
With logs you mention the pod name — nginx or whatever the pod name is. For troubleshooting you may also need to know how to exec into a specific container, like kubectl exec with an interactive terminal (-it), then the pod name, and then you mention that you want to exec a bash or a shell. You need to have some degree of comfort with the Linux command line here. And in general, be aware of how to handle things like folder-permission issues; make sure you have sufficient rights.

Also, if you are getting a message specific to Amazon like FailedScheduling, that means you're giving it a very low-powered machine like a T2 or T3 micro. Maybe you need to go to a t2.medium, and if FailedScheduling is coming even on a t2.medium, an m5.large is something you need to go to.
What are these? These are nothing but Amazon machine types — M is the memory/general-purpose series, T is the burstable series — and the ones provided to your account for free, or almost free, during the first year are the T2 micro and T2 nano instances.

It's better to first start by making a docker-compose file, so that you get a hang of how your Docker containers are working. If you are OK with publishing your artifacts — your Docker images — to the public, then Docker Hub is a good option, but I feel ECR is a good option too. And in ECR, I feel you must not load everything the container wants into the image. Sometimes, for example, Python needs a training file like an H5 file, or a Spark solution needs some HDFS file, and these are typically numerous GBs. I feel you should load those at runtime: as soon as the server starts — as soon as the Flask server is running, or as soon as Spark starts running — you can start fetching the file from an S3 bucket, because keeping large files in ECR is too expensive. And if you load from S3, you can even control whether traffic hits your particular pod or container yet.

That traffic is controlled by something called a liveness probe and a readiness probe. A liveness probe signals that your API is alive; readiness is when you are OK for the Kubernetes controller to start sending traffic to you. You can mention what your liveness or readiness probe should be: typically an HTTP endpoint, or some file-system-related probe that checks whether a particular file exists or not, and you define that in your deployment or pod definition. Also try to keep your ECR images slim, so that both your build and deploy time are reduced.
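A container spec fragment for the probe idea above might look like this — the paths, port, and timings are illustrative assumptions, not values from the episode. The readiness probe here checks that the model file has already been fetched from S3 before traffic is sent:

```yaml
# Hypothetical probe configuration for the image-recognition container.
containers:
  - name: image-recognition
    image: my-registry/image-recognition:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      # Only mark the pod ready once the H5 model pulled from S3 exists on disk.
      exec:
        command: ["cat", "/models/model.h5"]
      initialDelaySeconds: 30
      periodSeconds: 10
```

If the readiness probe fails, Kubernetes keeps the pod out of the Service's endpoints; if the liveness probe fails repeatedly, the container is restarted.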
It's just that when the traffic starts routing, it takes its own time.

So how should your services be structured? You would want to keep most things as internal as possible, unless the application really needs them exposed. Generally it's difficult to expose every service. What I would recommend: suppose you want to expose three things — for the accounts service, the ability to see the list of accounts; for the transactions service, the transactions; and for image recognition, the endpoint that reports which objects have been detected. Then you don't go exposing all your services. You keep all your services within the same cluster and you maintain one new service running Ocelot, which is an API gateway. You could make that Ocelot service of type LoadBalancer, and in the Ocelot definition you can say: when I get this request, do something like a 302 redirect to the internal service — but the client won't see it; within the API gateway, the redirect fetches the data. The best part is that you only need to expose your API gateway, and all the others stay as ClusterIP. Your API gateway is the thing that has authentication: you could use Cognito, Azure identity, or Auth0, on just the API gateway. The others are not exposed at all — they are running behind cluster IPs. So even during development you don't have to worry about authentication everywhere, because all your third-party traffic goes over this API gateway. The API gateway itself runs as a deployment.
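The split described above — internal services on ClusterIP, only the gateway on a LoadBalancer — could be sketched like this (service names and ports are my placeholders):

```yaml
# Internal service: reachable only inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: accounts
spec:
  type: ClusterIP
  selector:
    app: accounts
  ports:
    - port: 80
      targetPort: 8080
---
# The Ocelot API gateway is the only service exposed externally.
apiVersion: v1
kind: Service
metadata:
  name: ocelot-gateway
spec:
  type: LoadBalancer
  selector:
    app: ocelot-gateway
  ports:
    - port: 443
      targetPort: 8080
```

Other pods reach the accounts service by its DNS name (`accounts`), never by IP.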
You could write a Service of type LoadBalancer, and if you deploy it in AWS, an ELB — Elastic Load Balancer — is by default provisioned for it. And there is an area where Amazon provides its own annotations for specifying, say, what type of certificate you would like to use. Every resource in Amazon is controlled by something called IAM, and IAM can protect your Amazon resources from being misused. What you could do is give the pod an IAM role that lets it pull the certificate, or at build time use the annotations Amazon provides for the load balancer. For example, there is an annotation known as service.beta.kubernetes.io/aws-load-balancer-ssl-cert: you provide that as an annotation, and its value is the ARN of the certificate you have loaded into ACM, the Amazon Certificate Manager. So your external load balancer — the Service of Kubernetes type LoadBalancer — is the one exposed over HTTPS, while everything else is a ClusterIP behind it. That can be a very simple solution. Of course, if you have a lot of URL rules you could bring in an ingress controller, but I recommend you stick to Ocelot for most of your URL work.

The other services I have discussed are Kubernetes deployments, and the images could be pulled from ECR. How could they be built? You can have AWS CodeBuild build your source code — do a docker build through the docker-compose — and then push it to the Amazon ECR registry from CodeBuild. Then, using CodePipeline and CodeDeploy, you can deploy the solution; with CodeDeploy you just have a series of shell scripts, AfterInstall, BeforeInstall and so on. And your CodeBuild spec can be written as a YAML file.
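Attaching the ACM certificate via the annotation mentioned above might look like this — the ARN and the extra port/protocol annotations are placeholders I've added for completeness:

```yaml
# Hypothetical HTTPS-terminating load balancer service for the gateway.
apiVersion: v1
kind: Service
metadata:
  name: ocelot-gateway
  annotations:
    # ARN of a certificate already uploaded to ACM (placeholder value).
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555"
    # Terminate TLS at the ELB; talk plain HTTP to the pods.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: ocelot-gateway
  ports:
    - port: 443
      targetPort: 8080
```

The ELB that AWS provisions for this Service then serves HTTPS using the ACM certificate, while the pods behind it stay on plain HTTP inside the cluster.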
Even the CodeDeploy spec can be written as a YAML file. So CodeBuild builds it, and CodeDeploy is the one that runs all the commands. And eksctl is a good tool for creating your clusters — for example, "I need 3 machines of t2.medium"; you can use eksctl for that. For everything else you would use the kubectl commands: for example, kubectl apply pointed at your deployments directory, which has all the deployment YAML files, or kubectl apply for a service. That is how you get kubectl to talk to your Kubernetes controller, so that it can orchestrate and run your workloads.

I would recommend that you maintain your secrets in Systems Manager. Systems Manager has a specific product called Parameter Store, which is given by AWS free of cost. You do need to maintain a KMS key, and the KMS key is paid for, but the Parameter Store itself is free. In order to encrypt values in the Parameter Store, you need to ensure a KMS key is present. Then, while you're running your kubectl create secret command, the values can be pulled from Parameter Store, so your Parameter Store entries get converted into your secrets.
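The "3 machines of t2.medium" example above could be expressed as the file.yml that eksctl consumes — cluster name and region here are my own placeholders:

```yaml
# Hypothetical eksctl config (file.yml) for: eksctl create cluster --config-file=file.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: retail-bank-demo
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: t2.medium
    desiredCapacity: 3
```

Once the cluster is up, eksctl writes the kubeconfig entry so the kubectl commands above talk to this cluster.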
You can put an entire JSON value in there, but of course it should be under 4 KB — unless you want to make an advanced parameter, which is paid for; standard parameters are free. Make sure you know how to tear down your cluster using kubectl as well as eksctl. And make sure you're monitoring your nodes with CloudWatch, and that your application logs are being written to Kibana.

You could also do an automatic code review by asking CodeBuild to optionally run a Sonar scan, which can show you your code coverage in terms of tests as well as how maintainable your code is. Then there is something that stitches CodeDeploy, CodeBuild, and your GitHub repository together: CodePipeline. CodePipeline even allows you to take approvals before you actually do the deployment. It lets you set up a workflow — when this change happens, pull it into CodeBuild, start building, and then deploy — but before deploying you may want an email approval, or you may want to run some load testing or your SpecFlow scenarios. So it's good to use an ecosystem. You could use Jenkins for everything I have just mentioned, but I would let AWS manage most of it.

I feel it's sometimes OK for you to use two different cloud providers, but I recommend that you don't, because it takes time for one cloud provider to talk to another cloud provider, given the different geo-locations. But sometimes — for example, if you want to use a cheaper option like the free tier of Azure Web Apps —
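The CodeBuild step described above — docker build, then push to ECR — is typically a buildspec.yml; account ID, region, and image name below are placeholders of mine:

```yaml
# Hypothetical buildspec.yml for CodeBuild: build a Docker image and push it to ECR.
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against the ECR registry.
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      - docker build -t accounts:latest .
      - docker tag accounts:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/accounts:latest
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/accounts:latest
```

CodePipeline can then take the pushed image through an approval stage before CodeDeploy rolls it out.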
I think the basic App Service tier is free, which gives about 0.5 GB, and then there are AWS databases that are free, like DynamoDB. So initially, if you want to start everything free, you might even have your tasks written in dev.azure.com. Some people want to start free and then move on to more costly resources, and it's OK for your dev environment to be like that. But for your UAT and prod, make sure you don't talk across clouds, because it's difficult for you to manage slow or fast traffic. If you have a UI, I recommend that you put all your static resources — the JavaScript and CSS — in a CDN.

And even within the internal cluster, don't talk using IP addresses; use the name of the service — it's easier. I came across a situation where the services were not able to talk to each other; typically the metadata and labels are very important, so make sure they all have matching labels. Then the UAT environment can talk amongst itself, and the dev environment can talk amongst itself.
And yeah, the volumes: as far as possible, try your best to use an emptyDir volume. If you have an admin or a DevOps engineer with you, try to set up a StorageClass with provisioned volumes (PVs). But if you don't have that luxury, you can just use a provisioned volume and plug in a cloud volume like EBS, or you could use Azure File Storage. This way you will be able to maintain files — and it's easier if your application is able to talk to S3, pull the file, and put it into a local emptyDir volume. Try not to use hostPath, because that depends on the machine; though of course, if you just want to pull a file once and not pull it again and again, then it's OK to store it on the machine.

That's it from my side. If you have any doubts, you can reach out to me on Twitter: I'm available at @KURTZACE — that's my handle. My name is Karan Bhandari, and I'm working as a technology lead at Societe Generale. Hope you have a good time migrating to a Kubernetes workload, and have a super productive rest of the week. Goodbye.