Q1. What is Puppet?

I would advise you to first give a short definition of Puppet: Puppet is a Configuration Management tool that is used to automate administration tasks.

Now, you should describe how the Puppet Master and Agent communicate.

Puppet has a Master-Slave architecture in which the Slave first sends a Certificate Signing Request to the Master, and the Master signs that certificate in order to establish a secure connection between the Puppet Master and the Puppet Slave, as shown in the diagram below. The Puppet Slave then requests its configuration from the Puppet Master, and the Puppet Master pushes the configuration to the Slave.

Refer to the diagram below, which illustrates the description above:

 

Q2. How does Puppet work?

For this question, just explain the Puppet architecture. Refer to the diagram below:

Puppet Master Slave Architecture - Puppet Interview Questions - Edureka

The following functions are performed in the image above:

  • The Puppet Agent sends Facts to the Puppet Master. Facts are basically key/value data pairs that represent some aspect of the Slave's state, such as its IP address, uptime, operating system, or whether it is a virtual machine. I will explain Facts in detail later in the blog.
  • The Puppet Master uses the facts to compile a Catalog that defines how the Slave should be configured. A Catalog is a document that describes the desired state for each resource that the Puppet Master manages on a Slave. I will explain catalogs and resources in detail later.
  • The Master sends the compiled Catalog back to the Slave, which applies it to bring itself into the desired state.
  • The Puppet Slave then reports back to the Master indicating that the configuration is complete, which is visible in the Puppet dashboard.

Now the interviewer might dig in deeper, so the next set of Puppet interview questions will test your knowledge of the various components of Puppet.

Q3. What are Puppet Manifests?

This is a very important question, so make sure you answer it in the correct flow; in my opinion you should first define Manifests.

Every node (or Puppet Agent) has its configuration details held on the Puppet Master, written in the native Puppet language. These details are written in a language which Puppet can understand and are termed Manifests. Manifests are composed of Puppet code, and their filenames use the .pp extension.

Now give an example: you can write a manifest on the Puppet Master that creates a file and installs Apache on all Puppet Agents (Slaves) connected to the Puppet Master, along the lines of the sketch below.
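
A minimal sketch of such a manifest could live in site.pp on the Master (the file path and the Debian-style package name 'apache2' are assumptions; adjust them for your Puppet version and OS):

# site.pp on the Puppet Master -- hypothetical example
node default {
  package { 'apache2':
    ensure => installed,
  }

  file { '/var/www/html/index.html':
    ensure  => file,
    content => "Managed by Puppet\n",
    require => Package['apache2'],   # install Apache before managing the file
  }
}

Every Agent that checks in receives a catalog compiled from this manifest and converges to that state.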

Q4. What is a Puppet Module and how is it different from a Puppet Manifest?

For this answer I would prefer the explanation mentioned below:

A Puppet Module is a collection of Manifests and data (such as facts, files, and templates), and they have a specific directory structure. Modules are useful for organizing your Puppet code, because they allow you to split your code into multiple Manifests. It is considered best practice to use Modules to organize almost all of your Puppet Manifests.

Puppet programs are called Manifests. Manifests are composed of Puppet code and their file names use the .pp extension. 
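
As an illustration, a hypothetical module named 'apache' would typically be laid out like this (directory names follow Puppet's documented module structure; the class names are assumptions):

apache/
    manifests/
        init.pp      (class apache)
        vhost.pp     (class apache::vhost)
    files/           (static files served to agents)
    templates/       (ERB templates)
    facts.d/         (external facts shipped with the module)

The Manifest is the individual .pp file; the Module is the whole directory tree that groups related manifests and their data.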

Q5. What is Facter in Puppet?

You are expected to answer what exactly Facter does in Puppet, so in my opinion you should start by explaining:

Facter is basically a library that discovers and reports per-Agent facts to the Puppet Master, such as hardware details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and more. These facts are then made available in the Puppet Master's Manifests as variables.
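
For example, you can run Facter on an Agent to inspect individual facts (the values shown are purely illustrative):

$ facter osfamily ipaddress
ipaddress => 10.0.2.15
osfamily => Debian

Inside a Manifest the same data is then available as variables such as $osfamily or $ipaddress.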

Q6. What is Puppet Catalog?

I would suggest you first explain the use of the Puppet Catalog.

When configuring a node, Puppet Agent uses a document called a catalog, which it downloads from a Puppet Master. The catalog describes the desired state for each resource that should be managed, and may specify dependency information for resources that should be managed in a certain order.

If your interviewer wants to know more about it, mention the points below:

Puppet compiles a catalog using three main sources of configuration info:

  • Agent-provided data
  • External data
  • Puppet manifests
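
If the interviewer asks how to see this in practice, you can mention that a single agent run can be triggered by hand to watch the catalog being requested, compiled and applied (the hostname and timing below are illustrative):

puppet agent --test
# Info: Caching catalog for agent1.example.com
# Notice: Applied catalog in 2.34 seconds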

Q7. What size organizations should use Puppet?

There is no minimum or maximum organization size that can benefit from Puppet, but there are sizes that are more likely to benefit. Organizations with only a handful of servers are unlikely to consider maintaining those servers to be a real problem, while organizations with many servers are more likely to find it difficult to manage those servers manually, so Puppet is more beneficial for those organizations.

 

Q: – What is a Module and how is it different from a Manifest?

Whatever manifests we define in modules can be called or included in other manifests, which makes management of Manifests easier. It also helps you to push specific manifests to a specific Node or Agent.

Q: – Command to check requests of Certificates ?

puppetca --list (2.6)
puppet ca list (3.0)

Q: – Command to sign Requested Certificates

puppetca --sign hostname-of-agent (2.6)
puppet ca sign hostname-of-agent (3.0)

Q: – Where Puppet Master Stores Certificates

/var/lib/puppet/ssl/ca/signed

Q: – What is Facter ?

Sometimes you need to write manifests with conditional expressions based on agent-specific data, which is available through Facter. Facter provides information like the kernel version, distribution release, IP address, CPU info and so on. You can also define your own custom facts.
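
A small, hypothetical manifest snippet using a fact in a conditional might look like this (the package names for the two OS families are assumptions):

case $::osfamily {
  'RedHat': { $http_pkg = 'httpd' }
  'Debian': { $http_pkg = 'apache2' }
  default:  { fail("Unsupported OS family: ${::osfamily}") }
}

package { $http_pkg:
  ensure => installed,
}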

Q: – What is the use of etckeeper-commit-post and etckeeper-commit-pre on Puppet Agent ?

etckeeper-commit-post: in this configuration file you can define commands and scripts which execute after pushing configuration on the Agent.
etckeeper-commit-pre: in this configuration file you can define commands and scripts which execute before pushing configuration on the Agent.

Q: – What is Puppet Kick ?

By default, the Puppet Agent requests configuration from the Puppet Master after a periodic interval known as the "runinterval". Puppet Kick is a utility which allows you to trigger the Puppet Agent from the Puppet Master.
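
A sample invocation from the Master, with a hypothetical agent hostname (note that puppet kick is deprecated in newer Puppet releases in favour of orchestration tools such as MCollective):

puppet kick agent1.example.com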

Q: – What is MCollective ?

MCollective is a powerful orchestration framework. It lets you run actions on thousands of servers simultaneously, using existing plugins or writing your own.
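
Typical mco commands look like the following sketch (the package agent plugin must be installed on the nodes for the second one):

mco ping                    # discover which nodes respond over the middleware
mco package install ntp     # run the install action on every matching node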

Q. Describe the most significant gain you made from automating a process through Puppet?
“I automated the configuration and deployment of Linux and Windows machines using Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the roles and profiles paradigm and documented the purpose of each module in README to ensure that others could update the module using Git. The modules I wrote are still being used, but they’ve been improved by my teammates and members of the community.”

Q. Tell me about a time when you used collaboration and Puppet to help resolve a conflict within a team?
“The development team wanted root access on test machines managed by Puppet in order to make specific configuration changes. We responded by meeting with them weekly to agree on a process for developers to communicate configuration changes and to empower them to make many of the changes they needed. Through our joint efforts, we came up with a way for the developers to change specific configuration values themselves via data abstracted through Hiera. In fact, we even taught one of the developers how to write Puppet code in collaboration with us.”

Q. Which open source or community tools do you use to make Puppet more powerful?
“Changes and requests are ticketed through Jira and we manage requests through an internal process. Then, we use Git and Puppet’s Code Manager app to manage Puppet code in accordance with best practices. Additionally, we run all of our Puppet changes through our continuous integration pipeline in Jenkins using the beaker testing framework.”

 

Q. What is the use of Virtual Resources in Puppet?

First you need to define Virtual Resources.

Virtual Resources specify a desired state for a resource without necessarily enforcing that state. Although a virtual resource can only be declared once, it can be realized any number of times.

I would suggest you mention the uses of Virtual Resources as well (a short sketch follows the list below):

  • Resources whose management depends on at least one of multiple conditions being met.
  • Overlapping sets of resources which might be needed by any number of classes.
  • Resources which should only be managed if multiple cross-class conditions are met.
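
A minimal sketch of declaring and realizing a virtual resource (the user name and uid are hypothetical):

# Declare the resource virtually -- nothing is enforced yet
@user { 'deploy':
  ensure => present,
  uid    => '1050',
}

# Any class that actually needs the account can realize it
realize User['deploy']

# Or realize by attribute matching with a resource collector
User <| title == 'deploy' |>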

Q. Can I access environment variables with Facter in Puppet?

I would suggest you start this answer by saying:

Not directly. However, Facter reads in custom facts from a special subset of environment variables. Any environment variable with a prefix of FACTER_ will be converted into a fact when Facter runs.

Now explain it to the interviewer with an example:

$ FACTER_FOO="bar"
$ export FACTER_FOO
$ facter | grep 'foo'
foo => bar

The value of the FACTER_FOO environment variable would now be available in your Puppet manifests as $foo, and would have a value of ‘bar’. Using shell scripting to export an arbitrary subset of environment variables as facts is left as an exercise for the reader.


1) DevOps! How can you define it in your own words?

It is the highly effective daily collaboration between software developers and IT operations / web operations engineers to produce a working system or release software.

A DevOps implementation is generally aligned with Agile methodologies, where deploying working software to Production is generally the highest priority. In Agile implementations, emphasis is placed on people over processes, so a DevOps engineer must be willing to work very closely with Agile development teams to ensure they have the environment necessary to support functions such as automated testing, continuous integration and continuous delivery. In a traditional implementation, without DevOps, the operations team is often isolated from developers, often working under a help-desk model with general service level agreements where the operations team treats developers as a customer. This is a proven model which obviously can work very well, but in a DevOps environment, development and operations are streamlined and barriers between the two groups should not exist.

2) Why do we need DevOps?

Companies are now facing the need to deliver more, faster and better applications to meet the ever more pressing demands of conscious users and to reduce "Time To Market". DevOps often helps deployments happen very fast.

3) What is agile development and Scrum ?

Agile development is used as an alternative to the Waterfall development practice. In Agile, the development process is more iterative and incremental; there is more testing and feedback at every stage of development, as opposed to only at the last stage in Waterfall.

Scrum is used to manage complex software and product development, using iterative and incremental practices. Scrum has three roles, i.e. Product Owner, Scrum Master, and Team.

4) Can we consider DevOps as an agile methodology ?

Of course! DevOps is a movement to reconcile and synchronize development and production through a set of good practices. Its emergence is motivated by deeply changing business demands, with businesses wanting to speed up change to stay closer to the requirements of the business and the customer.

5) What is a DevOps engineer’s duty with regard to Agile development?

A DevOps engineer works very closely with Agile development teams to ensure they have the environment necessary to support functions such as automated testing, continuous integration and continuous delivery. The DevOps engineer must be in constant contact with the developers and make all required parts of the environment work seamlessly.

Technical Questions

6) Have you worked on  containers ? 

Containers are a form of lightweight virtualization, heavier than chroot but lighter than hypervisors. They provide isolation among processes while using the same kernel as the host machine, along with cgroups functionality within the kernel. Container formats differ among themselves in that some provide a more VM-like experience while others containerize only a single application.

LXC containers are the most VM-like and the most heavyweight, while Docker used to be more lightweight and was initially designed for single-application containers. In more recent releases, Docker introduced whole-machine containerization features, so now Docker can be used both ways. There is also rkt from CoreOS and LXD from Canonical, which builds upon LXC.
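
A quick, hypothetical example of the single-application style with Docker:

docker run -d --name web -p 8080:80 nginx   # detached nginx container, host port 8080 mapped to 80
docker ps                                    # list running containers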

7) What is Kubernetes? Explain

It is a massively scalable tool for managing containers, made by Google. It is used internally on huge deployments, and because of that it is perhaps the best option for production use of containers. It supports self-healing by restarting non-responsive containers, it packs containers in a way that they take fewer resources, and it has many other great features.
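
A rough illustration of handing a container to Kubernetes (the exact behaviour of kubectl run varies by version; the name 'web' is hypothetical):

kubectl run web --image=nginx    # ask the cluster to run an nginx container
kubectl get pods                 # Kubernetes schedules it and restarts it if it dies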

8) What is the function of a CI (Continuous Integration) server?

A CI server's function is to continuously integrate all the changes committed to the repository by different developers and check for compile errors. It needs to build the code several times a day, preferably after every commit, so that it can detect which commit caused the breakage if a breakage happens.

Note: available and popular CI tools are Jenkins, TeamCity, CircleCI, Hudson, Buildbot, etc.

9) What is Continuous Delivery ?

It is the practice of delivering software for testing as soon as it is built by the CI (Continuous Integration) server. It requires heavy use of a version control system so that the latest build is always available to developers and testers alike.

10) What is Vagrant and what is it used for ?

Vagrant is a tool that can create and manage virtualized (or containerized) environments for testing and developing software. At first, Vagrant used VirtualBox as the hypervisor for virtual environments, but now it also supports KVM.
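
A minimal, hypothetical Vagrantfile looks like this (the box name and the inline shell provisioner are assumptions):

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "shell", inline: "apt-get update && apt-get -y install nginx"
end

Running vagrant up then builds the VM, and vagrant destroy throws it away when you are done.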

11) Have you ever used any scripting language?

As far as scripting languages go, the simpler the better. In fact, the language itself isn’t as important as understanding design patterns and development paradigms such as procedural, object-oriented, or functional programming.

Currently, several scripting languages are available, so the question arises: what is the most appropriate language for the DevOps approach? Simply put, it depends on the context of the project and the tools used; for example, if Ansible is used it is good to have knowledge of Python, and if it is Chef, then Ruby.

12) What is the role of a configuration management tool in devops ?

Automation plays an essential role in server configuration management. For that purpose we use CM tools; they store information about versions and builds of the software and testware and provide traceability between software and testware.

13) What is the purpose of CM tools and which one you have used ?

The purpose of Configuration Management tools is to automate the deployment and configuration of software on a large number of servers. Most CM tools use an agent architecture, which means that every machine being managed needs to have an agent installed. My favorite tool is one that uses an agentless architecture: Ansible. It only requires SSH and Python, and if the raw module is being used, not even Python is required, because it can run raw bash commands (see the sketch below). Other available and popular CM tools are Puppet, Chef and SaltStack.
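
As an illustration of the agentless/raw-module point, an ad-hoc call like the following needs nothing but SSH on the targets (the inventory file name is hypothetical):

ansible all -i hosts -m raw -a "cat /etc/os-release"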

14) What is OpenStack ?

OpenStack is often called a Cloud Operating System, and that is not far from the truth. It is a complete environment for deploying IaaS, which gives you the possibility of making your own cloud, similar to AWS. It is highly modular and consists of many sub-projects, so you can pick and choose which functionality you need. OpenStack distributions are available from Red Hat, Mirantis, HPE, Oracle, Canonical and many others. It is a completely open source project, but some vendors make proprietary distributions.

15) Classify Cloud Platforms by category?

Cloud Computing software can be classified as Software as a Service or SaaS, Infrastructure as a Service or IaaS and Platform as a Service or PaaS.

SaaS is a piece of software that runs over the network on a remote server and has only its user interface exposed to users, usually in a web browser. An example is salesforce.com.

Infrastructure as a Service is a cloud environment that exposes a VM to the user to use as an entire OS, or a container where you could install anything you would install on your own server. Examples of this are OpenStack, AWS and Eucalyptus.
PaaS allows users to deploy their own application on a preinstalled platform, usually a framework of application servers and a suite of developer tools. Examples of this are OpenShift and Heroku.

16) What are the easiest ways to build a small cloud?

VMfest is one of the options for making an IaaS cloud from VirtualBox VMs in no time. If you want a lightweight PaaS, there is Dokku, which is basically a bash script that makes a PaaS out of Docker containers.

17) What is AWS (Amazon Web Services)? Did you get a chance to work on Amazon tools?

AWS provides a set of flexible services designed to enable companies to create and deliver products with greater speed and reliability using AWS and DevOps practices. These services simplify provisioning and infrastructure management, application code deployment, automation of the software release process, and monitoring of application and infrastructure performance. Amazon provides tools like AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, etc., that help make DevOps easier.

18) What is EC2 ?

Amazon EC2 Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a cluster of Amazon EC2 instances.

The EC2 service is inseparable from the concept of the Amazon Machine Image (AMI). The AMI is the image of the virtual machine that will be executed. EC2 is based on XEN virtualization, which is why it is quite easy to move XEN servers to EC2.

19) Do you find any advantage of using NoSQL database over RDBMS ?

Typical web applications are built with a three-tier architecture. To carry the load, more web servers are simply added behind a load balancer to support more users. The ability to scale out is a key principle in the world of cloud computing, and it is more and more important now that VM instances can be easily added or removed to meet demand.

However, when it comes to the data layer, relational databases (RDBMS) do not allow simple scale-out and do not provide a flexible data model. Managing more users means adding more servers, and large servers are very complex, proprietary and disproportionately expensive, in contrast to the low-cost "commodity hardware" architectures in the cloud. Organizations are beginning to see performance issues with their relational databases for existing or new applications. Especially as the number of users increases, they realize the need for a faster and more flexible database. This is the time to begin to assess and adopt NoSQL databases in their web applications.

20) What are the main difficulties of migrating from SQL to NoSQL?

Each record in a relational database conforms to a schema, with a fixed number of fields (columns), each having a specified purpose and data type. Every record is the same. The data is normalized across several tables; the advantage is that there is less duplicate data in the database. The downside is that a change to the schema means performing several "ALTER TABLE" statements that require expensive locks on multiple tables simultaneously to ensure that the change does not leave the database in an inconsistent state.

With document databases, on the other hand, each document can have a completely different structure from other documents. No additional management is required on the database side to handle changes in the schema.

21) What are the benefits of NoSQL document databases?

The main advantages of document databases are the following :

  • Flexible data model: data can be inserted without a defined schema, and the format of the data being inserted can change at any time, providing extreme flexibility, which ultimately allows significant business agility.
  • Consistent, high performance: advanced NoSQL database technologies cache data transparently in system memory, a behavior that is completely transparent to the developer and to the team in charge of operations.
  • Easy scalability: NoSQL databases automatically propagate data between servers, requiring no participation from applications. Servers can be added and removed without disruption to applications, with data and I/O spread across multiple servers.

22) What are the main advantages of Git over CVS?

The biggest advantage is that Git is distributed while CVS is centralized. Changes in CVS are per file, while changes (commits) in Git always refer to the whole project. Git also offers much more tooling than CVS.

23) Difference between containers and virtual machines ?

Each VM instantiation requires starting a full OS, so VMs take up a lot of system resources, which quickly adds up to a lot of RAM and CPU cycles. A container host, by contrast, uses the process and file system isolation features of the Linux kernel.

24) What is CoreOS, and what are the alternatives?

CoreOS is a stripped-down Linux distribution meant for running containers, mainly with its own rkt format, but others are also supported. It was initially based on ChromeOS and supported Docker. The alternatives to this are Canonical's Ubuntu Snappy or Red Hat Enterprise Linux Atomic Host. Of course, containers can also be run on a regular Linux system.

25)  What is Kickstart ?

It is a way to install Red Hat based systems in an automated way. During a manual install, the Anaconda installer creates the file anaconda-ks.cfg, which can then be used with the system-config-kickstart tool to install the same configuration automatically on multiple systems. A minimal example is sketched below.
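
A heavily trimmed, hypothetical ks.cfg fragment might look like this (a real anaconda-ks.cfg is longer, and the root password hash below is a placeholder):

install
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --iscrypted <hash-goes-here>
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end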

26) What are tools for network monitoring? List few

For example: Nagios, Icinga 2, OpenNMS, Splunk and Wireshark. These tools are used to monitor network traffic and network quality, and to detect network problems even before they arise. Of those listed, only Splunk is proprietary; the others are open source.

27) What is Juju ?

Juju is an orchestration tool, primarily for Ubuntu, used for the management, provisioning and configuration of Ubuntu systems. It was initially written in Python and has since been rewritten in Go.

28) Give me an example of how you would handle projects?

As a DevOps engineer, I would demonstrate a clear understanding of DevOps project management tactics and also work with teams to set objectives, streamline workflow, maintain scope, research and introduce new tools or frameworks, translate requirements into workflow and follow up. I would resort to CI, release management and other tools to keep interdisciplinary projects on track.

29) What are post mortem meetings?

It is a meeting where we discuss what went wrong and what steps should be taken so that the failure doesn't happen again. Post mortem meetings are not about finding someone to blame; they are for preventing outages from reoccurring and planning a redesign of the infrastructure so that downtime can be minimized. It is about learning from mistakes.

30) What do you know about the serverless model?

Serverless refers to a model where the existence of servers is hidden from developers. It means you no longer have to deal with capacity, deployments, scaling, fault tolerance or the OS. It essentially reduces maintenance effort and allows developers to quickly focus on writing code.

Examples are AWS Lambda and the Auth0 serverless platform.

DevOps Example: Deploying Applications with Ansible

Ansible is a lightweight, extensible solution for automating your application provisioning. Ansible has no dependencies other than Python and SSH. It doesn't require any agents to be set up on the remote hosts and it doesn't leave any traces after it runs either. It allows you to significantly simplify your operations by creating easy YAML-based playbooks. It's good for configuration automation, deployments and orchestration.

Components of Ansible

Playbooks: Ansible playbooks are a way to send commands to remote computers in a scripted way. Instead of using Ansible commands individually to remotely configure computers from the command line, you can configure entire complex environments by passing a script to one or more systems.

Ansible playbooks are written in the YAML data serialization format. If you don’t know what a data serialization format is, think of it as a way to translate a programmatic data structure (lists, arrays, dictionaries, etc) into a format that can be easily stored to disk. The file can then be used to recreate the structure at a later point. JSON is another popular data serialization format, but YAML is much easier to read.

Let's look at a basic playbook that allows us to install a web application (nginx) on multiple hosts:

---
- hosts: webservers
  tasks:
    - name: Installs nginx web server
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - start nginx

  handlers:
    - name: start nginx
      service: name=nginx state=started

The hosts file: (by default under /etc/ansible/hosts) this is the Ansible inventory file, and it stores the hosts and their mappings to host groups (webservers, databases, etc.).

[webservers]
10.0.15.22
# example of setting a host in the inventory by IP address.
# also demonstrates how to set per-host variables.

[repository_servers]
example-repository
# example of setting a host by hostname. Requires local lookup in /etc/hosts
# or DNS.

[dbservers]
db01

The SSH key: for the first run, we'll need to tell Ansible the SSH and sudo passwords, because one of the things that the common role does is to configure passwordless sudo and deploy an SSH key. After that, Ansible can execute the playbook's commands on the remote nodes (hosts) and deploy the web application nginx.

Conclusion

Those are some of the questions you might encounter during an interview, but when learning about DevOps concepts you should by no means concentrate only on these; read everything and anything related to Linux and open source, and try any software that might be of use to you. This article hopefully gives you an idea of where to start. Thank you for reading.

Q. What is Ansible?
Ansible is developed in the Python language.
It is a software tool that is useful for deploying applications over SSH without any downtime. Using this tool, one can manage and configure software applications very easily.

Q. How does Ansible work?
There are many similar automation tools available, like Puppet, Capistrano, Chef, Salt, Space Walk, etc., but Ansible divides machines into two categories: controlling machines and nodes.
The controlling machine is where Ansible is installed, and the nodes are managed by this controlling machine over SSH. The locations of the nodes are specified by the controlling machine through its inventory.
The controlling machine (Ansible) deploys modules to the nodes using the SSH protocol; these modules are stored temporarily on the remote nodes and communicate with the Ansible machine through a JSON protocol over standard output.

Ansible is agentless, which means there is no need for any agent installation on remote nodes; in other words, there are no background daemons or programs executing for Ansible when it's not managing any nodes.

Ansible can handle hundreds of nodes from a single system over an SSH connection, and an entire operation can be handled and executed by the single command 'ansible'. But in some cases, where you are required to execute multiple commands for a deployment, you can build playbooks.
Playbooks are a bunch of commands which can perform multiple tasks, and each playbook is in YAML file format.

Q. What is the use of Ansible?
Ansible can be used in IT infrastructure to manage and deploy software applications to remote nodes. For example, let's say you need to deploy a single piece of software, or multiple pieces of software, to hundreds of nodes with a single command; this is where Ansible comes into the picture. With the help of Ansible you can deploy as many applications to as many nodes as you like with one single command, but you must have a little programming knowledge to understand the Ansible scripts.



Q: What are the advantages of using Ansible?
The three main advantages of using this tool are that Ansible is:

1. Agentless
2. Very low overhead
3. Good performance

Q: So how does Ansible work? Please explain in detail?
Within the market, there are many automation tools like Puppet, Capistrano, Chef, Salt, Space Walk, etc.

When it comes to Ansible, this tool is categorized into two types of servers:
1. Controlling machines
2. Nodes

Ansible is an agentless tool, so it doesn't require any mandatory installations on remote nodes, and no background programs are executed while it is managing nodes.
Ansible is able to handle a lot of nodes from a single system over an SSH connection.
Playbooks are defined as a bunch of commands which are capable of performing multiple tasks, and they are in YAML file format.

Q: Do we have any Web Interface/ Rest API etc for this?
Yes, Ansible, Inc. makes a great, efficient tool for this: Ansible Tower, which provides a web interface and REST API. It is easy to use.

Q: What is Ansible Tower?
Ansible Tower is a web-based solution which makes Ansible very easy to use. It is considered to be, or acts like, a hub for all of your automation tasks. Tower is free for usage for up to 10 nodes.

Q: How do you change the documentation and submit it?
Usually, the documentation is kept in the main project folder in the git repository.
Complete instructions on this are available in the docs.

Q: How do you access Shell Environment Variables?
If you are just looking to access existing variables, then you can use the "env" lookup plugin.
For example:
Accessing the value of the HOME environment variable on the management machine:

local_home: "{{ lookup('env','HOME') }}"

Q: How can you speed up management inside EC2?
It is not advisable to manage a group of EC2 machines from your laptop.
The best way is to connect to a management node inside EC2 first and then execute Ansible from there.

Q: How can you disable Cowsay?
If cowsay is installed, Ansible decorates the output of your playbook runs with it.
If you would rather work in a professional, cow-free environment, then you have two options:
1. Uninstall cowsay
2. Set a value for the environment variable, as below

export ANSIBLE_NOCOWS=1

Q: How can you access a list of Ansible variables?
By default, Ansible gathers facts about the machines under management, and these facts are accessible in Playbooks and in templates. One of the best ways to view a list of all the facts that are available for a machine is to run the setup module in the ad-hoc way:

ansible -m setup hostname

Once this command is executed, it will print out a dictionary of all the facts that are available for that particular host. This is the best way to access the list of Ansible variables.

Q: How can you see all the inventory variables that are defined for a host?
The best way to see all the inventory variables is by executing the command below:

ansible -m debug -a "var=hostvars['hostname']" localhost

Q: Why don't you ship in X format?
There are several reasons for not shipping in X format. In general, it comes down to maintainability. There are tons of different ways to ship software, and it is very tedious to support all of them.

Q: What is that Ansible can do?
Ansible can do the following for us:
1. Configuration management
2. Application deployment
3. Task automation
4. IT orchestration

Q: Please define what Ansible Galaxy is?
Ansible Galaxy refers to the Galaxy website, where users can share roles, and to the ansible-galaxy CLI (command line interface) tool with which the installation, creation and management of roles happens.
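
Typical ansible-galaxy commands look like this (the role names are examples):

ansible-galaxy init my_role                  # scaffold a new role skeleton
ansible-galaxy install geerlingguy.nginx     # install a community role from the Galaxy website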

Q: Do you know what language Ansible is written in?
Ansible is written in Python and PowerShell
 
Q: Please explain what is Red Hat Ansible?
Ansible and Ansible Tower by Red Hat are both end-to-end, complete automation platforms which are capable of providing the following features or functionalities:

1. Provisioning
2. Deploying applications
3. Orchestrating workflows
4. Manage IT systems
5. Configuration of IT systems
6. Networks
7. Applications

All of these activities are handled by Ansible, and it can help a business to solve its real-time business problems.

Q: Is Ansible an open source tool?
Yes, Ansible is an open source tool; it is a powerful automation software tool that one can use.

Q: Why do you have to learn Ansible?
Ansible is more of a tool for servers, but does it have anything for networking? If you look closely, there is some support available in the market for networking devices. Using this tool will give you an overall view of your environment and also knowledge of how it works when it comes to network automation.

It is also one of those tools where it is considered good to explore something new.

Q: What are the Ansible server requirements?
If you are a Windows user, then you need to have a virtual machine in which Linux is installed.
It requires Python 2.6 or higher.

Q: How can you connect to other devices within Ansible?
Once Ansible is installed and the basic setup has been completed, an inventory is created. This is the base, and one can start testing Ansible. To connect to a different device, use the "ping" module. This can be used as a simple connection test:

ansible -m ping all

Q: Can you build your own modules with Ansible?
Yes, we can create our own modules within Ansible.
It is an open source tool which primarily works on Python. If you are good at programming in Python, you can start creating your own modules from scratch in a few hours, and you don't need any prior knowledge to do so.

Q: How can you find information in Ansible?
After completing the basic setup, make sure you explore the module called "setup". Using this setup module, you will be able to find out a lot of information.

Q: What does Fact mean in Ansible?
The term "Facts" is commonly used in the Ansible environment. Facts are described in the playbook areas where they display known and discovered variables about the system. Facts are used to implement conditional execution and also for getting ad-hoc information about the system.

You can see all the facts via:

$ ansible all -m setup

So if you want to extract only a certain part of the information, you can use the "setup" module's filter option to filter the output and get hold of just the facts that you need, as shown below.
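
For example, to see only the distribution-related facts (the pattern is illustrative):

ansible all -m setup -a "filter=ansible_distribution*"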

Q: What is ask_pass in Ansible?
The ask_pass is a control in Ansible Playbook.
This controls whether the ansible-playbook command prompts for a password by default. Usually, the default behavior is no; to prompt for a password it is set to:

ask_pass = True

If you are using SSH keys for authentication purposes then you really don’t have to change this setting at all.

Q: Explain what is ask_sudo_pass?
This control is very similar to ask_pass.
The ask_sudo_pass controls whether Ansible Playbook prompts for a sudo password. Usually, the default behavior is no:

ask_sudo_pass = True

One has to make sure to change this setting when sudo passwords are enabled most of the time.

Q: Explain what is ask_vault_pass?
Using this control we can determine whether Ansible Playbook should prompt for the vault password by default. As usual, the default behavior is no:

ask_vault_pass = True
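
All three of these controls can also be set in ansible.cfg rather than on the command line; a hypothetical [defaults] section enabling them would look like:

[defaults]
ask_pass       = True
ask_sudo_pass  = True
ask_vault_pass = True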

Q: Explain Callback_plugin in Ansible?
Callbacks are described as pieces of code in the Ansible environment that get called on a specific event and permit notifications.

This is more of a developer-related feature and it allows low-level extensions around Ansible so that they can be loaded from different locations without any problem.

Q: Explain module utilities in Ansible?
Ansible provides a wide variety of module utilities which help developers while developing their own modules. basic.py is a module utility which provides the main entry point for accessing the Ansible library, and using those basics one can start off working.

Q: Where is the unit testing available in Ansible?
Unit tests for all the modules are available in ./test/units/modules.
First you have to set up your testing environment.

Q: Explain in detail what an ad-hoc command is?
Well, an ad-hoc command is nothing but a command which is used to do something quickly, and it is more of a one-time use. Unlike ad-hoc commands, a playbook is used for repeated actions, which is something that is very useful in the Ansible environment. But there might be scenarios where we want to use an ad-hoc command to simply perform the required activity, as it is a non-repetitive activity; a couple of examples are sketched below.
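
A couple of hypothetical one-liners of that kind (the group name and package are assumptions):

ansible webservers -m shell -a "uptime"              # quick check across a group
ansible all -m apt -a "name=htop state=present" -b   # one-off package install with become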

 

How do I copy files recursively onto a target host?

A) The “copy” module has a recursive parameter. However, take a look at the “synchronize” module if you want to do something more efficient for a large number of files. The “synchronize” module wraps rsync. See the module index for info on both of these modules.
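
A sketch of both approaches as playbook tasks (the src/dest paths are hypothetical):

- name: Copy a small directory tree (the copy module recurses on directories)
  copy:
    src: files/app_conf/
    dest: /etc/app/

- name: Push a large tree efficiently with rsync via synchronize
  synchronize:
    src: files/app_conf/
    dest: /etc/app/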

Ansible Interview Questions # How do I access shell environment variables?

A) If you just need to access existing variables, use the ‘env’ lookup plugin. For example, to access the value of the HOME environment variable on the management machine:

---
# ...
  vars:
     local_home: "{{ lookup('env','HOME') }}"

If you need to set environment variables, see the Advanced Playbooks section about environments.

Starting with Ansible 1.4, remote environment variables are available via facts in the ‘ansible_env’ variable:

{{ ansible_env.SOME_VARIABLE }}

Ansible Interview Questions # How do I generate crypted passwords for the user module?

A) The mkpasswd utility that is available on most Linux systems is a great option:

mkpasswd --method=sha-512

If this utility is not installed on your system (e.g. you are using OS X) then you can still easily generate these passwords using Python. First, ensure that the Passlib password hashing library is installed:

pip install passlib

Once the library is ready, SHA512 password values can then be generated as follows:

python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"

Use the integrated Hashing filters to generate a hashed version of a password. You shouldn’t put plaintext passwords in your playbook or host_vars; instead, use Using Vault in playbooks to encrypt sensitive data.

 

Ansible Interview Questions # Is there a web interface / REST API / etc?

A) Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See Ansible Tower.

 

Ansible Interview Questions # How do I keep secret data in my playbook?

A) If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see Using Vault in playbooks.

In Ansible 1.8 and later, if you have a task that you don’t want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:

- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True

This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.

The no_log attribute can also apply to an entire play:

- hosts: all
  no_log: True

Though this will make the play somewhat difficult to debug. It’s recommended that this be applied to single tasks only, once a playbook is completed. Note that the use of the no_log attribute does not prevent data from being shown when debugging Ansible itself via the ANSIBLE_DEBUG environment variable.

Ansible Real Time Interview Questions And Answers

Ansible Interview Questions # When should I use {{ }}? Also, how to interpolate variables or dynamic variable names

A) A steadfast rule is 'always use {{ }} except with when:'. Conditionals are always run through Jinja2 in order to resolve the expression, so when:, failed_when: and changed_when: are always templated and you should avoid adding {{ }}.

In most other cases you should always use the brackets, even if previously you could use variables without specifying them (like with with_ clauses), as this made it hard to distinguish between an undefined variable and a string.

Another rule is ‘moustaches don’t stack’. We often see this:

{{ somevar_{{other_var}} }}

The above DOES NOT WORK, if you need to use a dynamic variable use the hostvars or vars dictionary as appropriate:

{{ hostvars[inventory_hostname]['somevar_' + other_var] }}

Ansible Interview Questions # Why don’t you ship in X format?

A) Several reasons; in most cases it has to do with maintainability. There are tons of ways to ship software and it is a herculean task to try to support them all. In other cases there are technical issues; for example, for Python wheels, our dependencies are not present, so there is little to no gain.

Ansible Interview Questions # How do I see all the inventory vars defined for my host?

A) By running the following command, you can see vars resulting from what you’ve defined in the inventory:

ansible -m debug -a "var=hostvars['hostname']" localhost

Ansible Interview Questions # How do I loop over a list of hosts in a group, inside of a template?

A) A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the “$groups” dictionary in your template, like this:

{% for host in groups['db_servers'] %}
    {{ host }}
{% endfor %}

If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

- hosts:  db_servers
  tasks:
    - debug: msg="doesn't matter what you do, just that they were talked to previously."

Then you can use the facts inside your template, like this:

{% for host in groups['db_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

Ansible Interview Questions # How do I access a variable name programmatically?

A) An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like so:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

The trick about going through hostvars is necessary because it’s a dictionary of the entire namespace of variables. ‘inventory_hostname’ is a magic variable that indicates the current host you are looping over in the host loop.

Ansible Interview Questions # How do I access a variable of the first host in a group?

A) What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we are using dynamic inventory, which host is the ‘first’ may not be consistent, so you wouldn’t want to do this unless your inventory is static and predictable. (If you are using Ansible Tower, it will use database order, so this isn’t a problem even if you are using cloud based inventory scripts).

Anyway, here’s the trick:

{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}

Notice how we’re pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you could use the Jinja2 ‘#set’ directive to simplify this, or in a playbook, you could also use set_fact:

- set_fact: headnode={{ groups['webservers'][0] }}

- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}

Notice how we interchanged the bracket syntax for dots – that can be done anywhere.

How do I handle python pathing not having a Python 2.X in /usr/bin/python on a remote machine?

A) While you can write ansible modules in any language, most ansible modules are written in Python, and some of these are important core ones.

By default, Ansible assumes it can find a /usr/bin/python on your remote system that is a 2.X version of Python, specifically 2.6 or higher.

Setting the inventory variable ‘ansible_python_interpreter’ on any host will allow Ansible to auto-replace the interpreter used when executing python modules.

Thus, you can point to any python you want on the system if /usr/bin/python on your system does not point to a Python 2.X interpreter.

Some Linux operating systems, such as Arch, may only have Python 3 installed by default. This is not sufficient and you will get syntax errors trying to run modules with Python 3. Python 3 is essentially not the same language as Python 2.

Python 3 support is being worked on but some Ansible modules are not yet ported to run under Python 3.0. This is not a problem though as you can just install Python 2 also on a managed host.

Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
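
A hypothetical inventory entry that sets the variable for one host:

[arch_hosts]
arch01 ansible_python_interpreter=/usr/bin/python2.7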

Ansible Interview Questions # What is the best way to make content reusable/redistributable?

A) If you have not done so already, read all about “Roles” in the playbooks documentation. This helps you make playbook content self-contained, and works well with things like git submodules for sharing content with others.

If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.

Ansible Interview Questions # Where does the configuration file live and what can I configure in it?

A) See Configuration file.

Ansible Interview Questions # How do I disable cowsay?

A) If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that you would like to work in a professional cow-free environment, you can either uninstall cowsay, or set the ANSIBLE_NOCOWS environment variable:

export ANSIBLE_NOCOWS=1

Ansible Interview Questions # How do I see a list of all of the ansible_ variables?

A) Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:

ansible -m setup hostname

This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe the output to a pager.


It's mostly a matter of opinion: those who love Microsoft's world will tell you that ASP.net is better, even for job opportunities, while those who prefer the rest will say that PHP is better and will point you to the huge number of sites done with PHP (and open source CMSes and so on), which outnumber ASP sites. Ever since Microsoft came up with ASP.net, there has been a widespread debate among programmers as to whether it is any better than the existing open source programming language PHP.


 

If you were to search the Internet for how loyalists of both PHP and ASP.net are doing almost everything short of biting each other's heads off, you would realize how hot this debate actually is. The major contention is that Microsoft products are generally considered to be superior to other products, but then there are programmers that have been using PHP for ages and never once has it let them down. While ASP.net is acclaimed as being more robust and speedier, PHP fans maintain that PHP has much better support and a very easy to understand language.

 


 

As the debate between PHP and ASP.net rages on, it is important to make a frank comparison between the two languages, so that other developers who are not so strong in their opinions are not caught in the argument between the two.

Both are good, both are fast, and both can be used to write any kind of program. ASP.net supports a huge framework, has many built-in components, and has a great editing environment (Visual Studio) through which you can easily write an ASP program without worrying about case sensitivity, while a PHP editing tool works much like a notepad, for which a programmer needs much stronger programming and syntax knowledge.

 


 


PHP is easy to learn. PHP has a wider community. PHP is not too true to OOP (Object Oriented Programming).

.NET may be tricky at first for beginners. .NET is very true to OOP. The .NET community is always up to date with the new trends and changes.

PHP can run on almost all platforms; .NET requires Windows with the .NET framework.

PHP is better for those who feel close to PHP and have worked with it for years; .NET is better for those who love Windows. It is an individual's liking or choice of the programming language they want to use. PHP is better in its own world and .NET is better in its own arena.

How to check battery status?

Open the Terminal and type the following command:

acpi

OR

acpi -i -b

Sample outputs:

Fig.01: Showing acpi battery status on Linux

Use upower command to get battery status

On the latest version of Linux try:

upower -i /org/freedesktop/UPower/devices/battery_BAT0

Sample outputs:

Fig.02: Displaying battery info using upower

What is POSIX?

POSIX defines the application programming interface (API), along with Unix command line shells and utility interfaces. This ensures software compatibility with flavors of Unix and other operating systems. The POSIX shell is implemented for many UNIX-like operating systems. The POSIX standard is designed to be used by both application programmers and system administrators. Most of the POSIX shell features are similar to the Korn Shell. The following operating systems are 100% compliant with various POSIX standards:

  • A/UX
  • AIX
  • HP-UX
  • INTEGRITY
  • IRIX
  • OS X
  • QNX
  • Solaris
  • Tru64
  • UnixWare

What is UNIX / Linux Korn Shell?

Korn Shell was developed by David Korn at Bell Laboratories.

It is upwardly compatible with most Bourne shell features.

It has interactive features like C Shell, but executes faster and has extended inline command editing capability.

The ksh93 version supports associative arrays and built-in floating point arithmetic.

Korn Shell Features

  1. Command history – Yes
  2. Line editing – Yes
  3. File name completion – Yes
  4. Alias command – Yes
  5. Restricted shells – Yes
  6. Job control – Yes

#!/usr/bin/ksh

All shell scripts for the KSH shell start with the first line:

#!/usr/bin/ksh

This is called a shebang, a hashbang, hashpling, or pound bang. The following is a KSH shell script file example:

#!/usr/bin/ksh
echo "Hello World!"

You can find the ksh path using the which command:
$ which ksh



