October 2, 2018
  1. Log in to cPanel.
  2. Click “Git™ Version Control”.
  3. Click “Create Repository”.
  4. Add your GitLab or GitHub repository URL.

*If your repository is private, you must generate an SSH key pair (ssh-keygen) and add the public key to your Git host.

If you want push deployment:

Create deploy.php and add it to your website.

In GitLab, go to your repository → Settings → Integrations.

Add the Hook URL: www.yoursite.com/deploy.php and save.
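A minimal deploy.php might simply pull the latest commit into the working copy. The sketch below assumes a hypothetical repository path and that shell_exec is enabled on the account; in production you should also verify a secret token sent by the hook:

```php
<?php
// Hypothetical deploy.php: pulls the latest code when GitLab/GitHub calls the hook.
$repoPath = '/home/username/repositories/mysite'; // adjust to your cPanel repo path
$output   = shell_exec('cd ' . escapeshellarg($repoPath) . ' && git pull 2>&1');
echo '<pre>' . htmlspecialchars($output) . '</pre>';
```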

The cPanel background task does the work, similar to a Kubernetes-style deployment.

That’s it: push deployment is set up.

Try your free account – gitlab.cobrasoftwares.org

As Singapore works towards becoming a Smart Nation, it is critical to strike a balance between leveraging Big Data to transform the economy and protecting data. On the business front, data relating to individual behaviours and preferences has become a competitive advantage for many organisations. However, while many organisations have recognised the value of data as the new fuel for growth, not all are well prepared for the fast-evolving data regulation landscape, both locally and across the globe.

Recently, Singapore’s Personal Data Protection Commission (PDPC) proposed an amendment to the current Personal Data Protection Act (PDPA), which will require organisations to notify customers of personal data breaches when they are discovered. Organisations will also have to report the breach within 72 hours. This will add to the existing PDPA, which contains various rules governing the collection, use, disclosure and care of personal data in Singapore. Rapid advances in technology, such as the ability of devices to continuously collect and transmit personal data across networks, present challenges for consent-based approaches to personal data protection. Organisations should be aware that the proposed review will potentially affect them if they process personal data for internal use or on behalf of another organisation.

Adopted in April 2016, the General Data Protection Regulation (GDPR) requires businesses to protect the personal data and privacy of EU citizens for transactions that occur within EU member states. The new regulation, which takes effect from May 25, 2018, covers where and how personal data, including credit card details, banking and health records, is stored and transferred.

Although GDPR may appear to affect only those living in the EU, local businesses should not dismiss the regulations, particularly since Singapore is by far the EU’s largest trading partner in Asean, accounting for around a third of EU-Asean trade in goods and services, and roughly two-thirds of investments between the two regions.

A recent report by Veritas has identified a consistent pattern among local organisations. It suggests that businesses hold a predominant amount of ROT (redundant, obsolete and trivial) and dark data stored on premises and in the cloud. If left unchecked, such data will unnecessarily cost organisations around the globe a cumulative US$3.3 trillion by 2020.

According to the latest Veritas study on GDPR, more than half of organisations in Singapore (56 per cent) are concerned that they will not be able to meet the new EU requirements, and just 18 per cent feel they are currently GDPR-compliant. However, it is encouraging to note that 95 per cent of the organisations here plan to drive behavioural changes through training, rewards and contracts to help ensure that they comply with GDPR policies.

Despite the alarming statistics, it is only fair to acknowledge that the biggest challenge for many organisations in Singapore is understanding what data resides in their complex IT environments, how to protect that data, and how to delete it from the system when requested or when it is no longer required. Veritas research also shows that a third (34 per cent) of organisations in Singapore do not have the right technology in place to cope with GDPR. With only six months to go before the rules take effect, organisations should look to establish a clearly defined governance framework with data management tools at its core.

As with any new regulation, businesses should be aware of the risks of prosecution for breaking the rules of GDPR, which could result in substantial penalties of up to four per cent of global turnover or 20 million euros (S$32 million), whichever is greater. However, the consequences of failing to comply do not end with these penalties.

Non-compliance with GDPR could have a potentially devastating effect on an organisation’s brand image, especially if and when a compliance failure is made public, possibly as a result of the new obligations to notify those affected of data breaches. Other adverse consequences include the devaluation of the brand as well as the loss of customer loyalty, which most businesses fear. According to the same Veritas study on GDPR, 20 per cent of the businesses surveyed fear that negative media or social coverage could cause their organisation to lose customers.

To stay GDPR-compliant, businesses can follow these guidelines to keep their organisation in check:


The critical first step in complying with GDPR is gaining a holistic understanding of where all the personal data held by your organisation is located. Building a data map of where this information is stored, who has access to it, how long it is retained, and where it is moved is essential to understanding how your enterprise is processing and managing personal data.


Residents of the EU can now request visibility into all of the personal data held on them by submitting a Subject Access Request (SAR). They can also request that the data be corrected (if factually wrong), ported (to a suitable export format) or deleted. Ensuring that your organisation can action and service these requests in a timely manner is critical to avoiding GDPR penalties.


Data minimisation, one of the fundamental principles of GDPR, is intended to ensure that organisations reduce the overall amount of stored personal data. This is done by keeping personal data only for the period of time directly related to the original intended purpose. Deploying and enforcing retention policies that automatically expire data over time would establish the foundation of your GDPR strategy.


Under GDPR, organisations have a general obligation to implement technical and organisational measures to show they have considered and integrated data protection into all data collection and processing activities. Organisations may benefit from existing advisory services that are available to educate and transfer knowledge to global legal, compliance and privacy teams on how their solutions can help address the GDPR challenge.


GDPR requires all organisations to report certain types of data breaches to the relevant supervisory authority, and in some cases to the individuals affected. You should ensure that you have capabilities in place to monitor for possible breaches, such as unexpected or unusual file access patterns, and to rapidly trigger reporting procedures.

By following these best practices, businesses will be able to comply with GDPR and other regulations such as the PDPA. Businesses will also develop data management capabilities that are more robust and compliant than before. To keep up with the changing technology landscape, it is more important than ever to have the appropriate data governance measures in place, to ensure that businesses stay on the right side of the law.

March 31, 2018



Service Powered by COBRA GPS Personal Tracker

Q1. What is Jenkins?

My suggestion is to start this answer by giving a definition of Jenkins.

Jenkins is an open source automation tool written in Java with plugins built for Continuous Integration purpose. Jenkins is used to build and test your software projects continuously making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.

Once you have defined Jenkins, give an example; you can refer to the use case below:

  • First, a developer commits the code to the source code repository. Meanwhile, the Jenkins server checks the repository at regular intervals for changes.
  • Soon after a commit occurs, the Jenkins server detects the changes that have occurred in the source code repository. Jenkins will pull those changes and will start preparing a new build.
  • If the build fails, then the concerned team will be notified.
  • If the build is successful, then Jenkins deploys the build to the test server.
  • After testing, Jenkins generates a feedback and then notifies the developers about the build and test results.
  • It will continue to check the  source code repository for changes made in the source code and the whole process keeps on repeating.
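The workflow above can be sketched as a declarative pipeline. In this sketch the stage contents, Maven commands, deploy script and mail recipient are illustrative assumptions, not fixed Jenkins requirements:

```groovy
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // check the repository for changes every ~5 minutes
    }
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }        // assumes a Maven project
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
        stage('Deploy to test server') {
            steps { sh './deploy-to-test.sh' }   // hypothetical deploy script
        }
    }
    post {
        failure {
            // notify the concerned team when the build fails
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```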

[Diagram: Jenkins architecture]

The interviewer now knows what Jenkins is, but why do we use it? There are many other CI tools as well, so why Jenkins? The next question in this list of Jenkins interview questions deals with that.

Q2. What are the benefits of using Jenkins?

I suggest you include the following benefits of Jenkins; if you can recall any other benefit apart from the points mentioned below, you can include that as well.

  • At the integration stage, build failures are caught early.
  • For each change in the source code, an automatic build report notification is generated.
  • To notify developers about build success or failure, it can be integrated with an LDAP mail server.
  • It supports continuous integration, agile development and test-driven development.
  • With simple steps, a Maven release project can be automated.
  • Bugs are easier to track at an early stage in the development environment than in production.

Interviewer: Okay Jenkins looks like a really cool tool, but what are the requirements for using Jenkins?

Q3. What are the pre-requisites for using Jenkins?

The answer to this is pretty straightforward. To use Jenkins you require:

  • A source code repository which is accessible, for instance, a Git repository.
  • A working build script, e.g., a Maven script, checked into the repository.

Remember, you mentioned plugins in your previous answer, so the next question in this Jenkins interview questions blog will be about plugins.

Q4. Mention some of the useful plugins in Jenkins?

Below I have mentioned some important Plugins:

  • Maven 2 project
  • Git
  • Amazon EC2
  • HTML publisher
  • Copy artifact
  • Join
  • Green Balls

[Image: useful Jenkins plugins]

I feel these are the most useful plugins. If you want to include any other plugin that is not mentioned above, you can add it as well, but make sure you first mention the plugins stated above and then add your own.

Q5. Mention what are the commands you can use to start Jenkins manually?

For this answer, I suggest you go with the following flow:

To start Jenkins manually open Console/Command line, then go to your Jenkins installation directory. Over there you can use the below commands:

To start Jenkins: jenkins.exe start
To stop Jenkins: jenkins.exe stop
To restart Jenkins: jenkins.exe restart
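The jenkins.exe commands apply to a Windows service installation. On Linux, where Jenkins typically runs as a system service or straight from the WAR file, the equivalents would be (service name assumed to be jenkins):

```shell
# When installed as a systemd service
sudo systemctl start jenkins
sudo systemctl stop jenkins
sudo systemctl restart jenkins

# Or run the WAR file directly
java -jar jenkins.war --httpPort=8080
```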

Q6. Explain how you can set up Jenkins job?

My approach to this answer will be to first mention how to create Jenkins job.

Go to Jenkins top page, select “New Job”, then choose “Build a free-style software project”.

Now you can tell the elements of this freestyle job:

  • Optional SCM, such as CVS or Subversion where your source code resides.
  • Optional triggers to control when Jenkins will perform builds.
  • Some sort of build script that performs the build (ant, maven, shell script, batch file, etc.) where the real work happens.
  • Optional steps to collect information out of the build, such as archiving the artifacts and/or recording javadoc and test results.
  • Optional steps to notify other people/systems with the build result, such as sending e-mails, IMs, updating issue tracker, etc..

Q7. Explain how to create a backup and copy files in Jenkins?

The answer to this question is really direct.

To create a backup all you need to do is to periodically back up your JENKINS_HOME directory. This contains all of your build jobs configurations, your slave node configurations, and your build history. To create a back-up of your Jenkins setup, just copy this directory. You can also copy a job directory to clone or replicate a job or rename the directory.
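As a sketch, the backup can be a simple tar archive of that directory. The demo below uses a throwaway directory standing in for JENKINS_HOME (on a real server it is often /var/lib/jenkins):

```shell
# Demo with a throwaway directory standing in for JENKINS_HOME
JENKINS_HOME=$(mktemp -d)
mkdir -p "$JENKINS_HOME/jobs/myjob"
printf '<project/>\n' > "$JENKINS_HOME/jobs/myjob/config.xml"

# Archive the whole directory: this captures job configs, node configs and history
BACKUP=/tmp/jenkins-backup.tar.gz
tar -czf "$BACKUP" -C "$(dirname "$JENKINS_HOME")" "$(basename "$JENKINS_HOME")"

# Verify the job configuration made it into the archive
tar -tzf "$BACKUP" | grep config.xml
```

Restoring (or cloning a job to another server) is the reverse: extract the archive, or copy a single job directory under jobs/.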

Q8. How will you secure Jenkins?

The way I secure Jenkins is mentioned below; if you have any other way to do it, then mention that:

  • Ensure global security is on.
  • Ensure that Jenkins is integrated with my company’s user directory with appropriate plugin.
  • Ensure that matrix/Project matrix is enabled to fine tune access.
  • Automate the process of setting rights/privileges in Jenkins with custom version controlled script.
  • Limit physical access to Jenkins data/folders.
  • Periodically run security audits on same.

I hope you have enjoyed the above set of Jenkins interview questions, the next set of questions will be more challenging, so be prepared.

Q9. Explain how you can deploy a custom build of a core plugin?

Below are the steps to deploy a custom build of a core plugin:

  • Stop Jenkins.
  • Copy the custom HPI file to $JENKINS_HOME/plugins.
  • Delete the previously expanded plugin directory.
  • Make an empty file called <plugin>.hpi.pinned.
  • Start Jenkins.

Q10. What is the relation between Hudson and Jenkins?

You can just say Hudson was the earlier name and version of the current Jenkins. After a dispute with Oracle, the project was renamed from Hudson to Jenkins.

Q11. What do you do when you see a broken build for your project in Jenkins?

There can be multiple answers to this question; I would approach this task in the following way:

I will open the console output for the broken build and try to see if any file changes were missed. If I am unable to find the issue that way, then I will clean and update my local workspace to replicate the problem on my local and try to solve it.

If you do it in a different way then just mention that in your answer.

Q12. Explain how you can move or copy Jenkins from one server to another?

I would approach this task by copying the jobs directory from the old server to the new one. There are multiple ways to do that; I have mentioned them below:

You can:

  • Move a job from one installation of Jenkins to another by simply copying the corresponding job directory.
  • Make a copy of an existing job by making a clone of a job directory by a different name.
  • Rename an existing job by renaming a directory. Note that if you change a job name you will need to change any other job that tries to call the renamed job.

Q13. What are the various ways in which build can be scheduled in Jenkins?

You can schedule a build in Jenkins in the following ways:

  • By source code management commits
  • After completion of other builds
  • Can be scheduled to run at a specified time (cron)
  • Manual Build Requests

Q14. What is the difference between Maven, Ant and Jenkins?

Maven and Ant are Build Technologies whereas Jenkins is a continuous integration tool.

Q15. Which SCM tools Jenkins supports?

Below are Source code management tools supported by Jenkins:

  • AccuRev
  • CVS
  • Subversion
  • Git
  • Mercurial
  • Perforce
  • ClearCase
  • RTC

Now, the next set of Jenkins interview questions will test your experience with Jenkins.

Q16. What are the two components Jenkins is mainly integrated with?

In my opinion, Jenkins is mainly integrated with the following:

  • Version control systems like Git and SVN.
  • Build tools like Apache Maven.

Jenkins Interview Questions # 1) What is Jenkins?

Answer # Jenkins is an open source automation server. Jenkins is a continuous integration tool developed in Java. Jenkins helps to automate the non-human part of the software development process, with continuous integration, and facilitates the technical aspects of continuous delivery.


Jenkins Interview Questions # 2) Why do we use Jenkins?

Answer # Jenkins is an open-source continuous integration software tool written in the Java programming language for testing and reporting on isolated changes in a larger code base in real time. The Jenkins software enables developers to find and solve defects in a code base rapidly and to automate testing of their builds.

Jenkins Interview Questions # 3) What is Maven and what is Jenkins?

Answer # Maven is a build tool, in short a successor of Ant. It helps with builds and version control. Jenkins, however, is a continuous integration system, wherein Maven is used for the build. Jenkins can also be used to automate the deployment process.


Jenkins Interview Question # 4) What is the difference between Hudson and Jenkins?

Answer # Jenkins is the new Hudson. It really is more like a rename, not a fork, since the whole development community moved to Jenkins. (Oracle is left sitting in a corner holding their old ball “Hudson“, but it’s just a soul-less project now.). In a nutshell Jenkins CI is the leading open-source continuous integration server.


Jenkins Interview Questions # 5) What is meant by continuous integration in Jenkins?

Answer # Continuous integration is a process in which all development work is integrated as early as possible. The resulting artifacts are automatically created and tested. This process allows teams to identify errors as early as possible. Jenkins is a popular open source tool to perform continuous integration and build automation.


Jenkins Interview Questions # 6) Why do we use Jenkins with selenium?

Answer # Running Selenium tests in Jenkins allows you to run your tests every time your software changes and deploy the software to a new environment when the tests pass. Jenkins can also schedule your tests to run at a specific time.


Jenkins Interview Questions # 7) What are CI Tools?

Answer # Here is the list of the top 8 Continuous Integration tools:

  • Jenkins
  • TeamCity
  • Travis CI
  • Go CD
  • Bamboo
  • GitLab CI
  • CircleCI
  • Codeship


Jenkins Interview Questions # 8) What is a CI CD pipeline?

Answer # A continuous integration and deployment (CI/CD) pipeline is an important aspect of a software project. It saves a ton of manual, error-prone deployment work. Combined with automated tests and code metrics, it results in higher-quality software.


Jenkins Interview Questions # 9) What is build pipeline in Jenkins?

Answer # Job chaining in Jenkins is the process of automatically starting other job(s) after the execution of a job. This approach lets you build multi-step build pipelines or trigger the rebuild of a project if one of its dependencies is updated.


Jenkins Interview Questions # 10) What is a Jenkins pipeline?

Answer # The Jenkins Pipeline plugin is a game changer for Jenkins users. Based on a Domain Specific Language (DSL) in Groovy, the Pipeline plugin makes pipelines scriptable and it is an incredibly powerful way to develop complex, multi-step DevOps pipelines.
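A minimal scripted pipeline written in this Groovy DSL might look like the following sketch; the repository URL and build command are placeholders:

```groovy
node {
    stage('Checkout') {
        git url: 'https://example.com/my-repo.git'   // placeholder repository
    }
    stage('Build') {
        sh 'mvn -B clean package'                    // assumes a Maven project
    }
    stage('Archive') {
        archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
    }
}
```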

Jenkins Interview Questions And Answers For Experienced

Jenkins Interview Questions # 11) What is a DSL Jenkins?

Answer # The Jenkins “Job DSL / Plugin” is made up of two parts: The Domain Specific Language (DSL) itself that allows users to describe jobs using a Groovy-based language, and a Jenkins plugin which manages the scripts and the updating of the Jenkins jobs which are created and maintained as a result.
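A hypothetical seed script for the Job DSL plugin might look like this; the job name, repository URL and build step are illustrative:

```groovy
// Seed script: describes a freestyle job in code instead of the UI
job('example-build') {
    scm {
        git('https://example.com/my-repo.git')
    }
    triggers {
        scm('H/15 * * * *')   // poll the repository every ~15 minutes
    }
    steps {
        shell('mvn -B package')
    }
}
```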


Jenkins Interview Questions # 12) What is continuous integration and deployment?

Answer # Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.


Jenkins Interview Questions # 13) What is the tool used for provisioning and configuration?

Answer # Ansible is an agent-less configuration management as well as orchestration tool. In Ansible, the configuration modules are called “Playbooks”. Like other tools, Ansible can be used for cloud provisioning.


Jenkins Interview Questions # 14) What is the difference between Maven, Ant and Jenkins?

Answer # Maven and Ant are build tools, but the main difference is that Maven also provides dependency management, a standard project layout and project management. As for the difference between Maven, Ant and Jenkins: the latter is a continuous integration tool, which is much more than a build tool.


Jenkins Interview Questions # 15) Which SCM tools Jenkins supports?

Answer # Jenkins supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands.

Jenkins Interview Questions For Testers

Jenkins Interview Questions # 16) How schedule a build in Jenkins?

Answer # In Jenkins, under the job configuration we can define various build triggers. Simply find the ‘Build Triggers’ section and check the ‘Build Periodically’ checkbox. With a periodic build you can schedule the build definition by the date or day of the week and the time at which to execute the build.

The format of the ‘Schedule’ textbox is as follows:

MINUTE (0-59), HOUR (0-23), DAY (1-31), MONTH (1-12), DAY OF THE WEEK (0-7)
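For example, using Jenkins cron syntax (the H token, which Jenkins replaces with a stable per-job hash to spread load across jobs, is Jenkins-specific):

```
# Every 15 minutes, offset by a per-job hash
H/15 * * * *

# At 02:00 on weekdays (Monday-Friday)
0 2 * * 1-5

# Once a day, at a time Jenkins picks between 00:00 and 07:59
H H(0-7) * * *
```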


Jenkins Interview Questions # 17) Why do we use Pipelines in Jenkins?

Answer # Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive continuous delivery pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline:

  • Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
  • Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master.
  • Pausable: Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run.
  • Versatile: Pipelines support complex real-world continuous delivery requirements, including the ability to fork/join, loop, and perform work in parallel.
  • Extensible: The Pipeline plugin supports custom extensions to its DSL and multiple options for integration with other plugins.

Jenkins Interview Questions # 18) What is a Jenkinsfile?

Answer # A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.

Creating a Jenkinsfile, which is checked into source control, provides a number of immediate benefits:

  1. Code review/iteration on the Pipeline
  2. Audit trail for the Pipeline
  3. Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.

Jenkins Interview Questions # 19) How do you create Multibranch Pipeline in Jenkins?

Answer # The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.


Jenkins Interview Questions # 20) What is blue ocean in Jenkins?

Answer # Blue Ocean is a project that rethinks the user experience of Jenkins, modelling and presenting the process of software delivery by surfacing information that’s important to development teams with as few clicks as possible, while still staying true to the extensibility that is core to Jenkins.


Jenkins Interview Questions # 21) What are the important plugins in Jenkins?

Answers # Here is the list of some important Plugins in Jenkins:

  1. Maven 2 project
  2. Git
  3. Amazon EC2
  4. HTML publisher
  5. Copy artifact
  6. Join
  7. Green Balls


Jenkins Interview Questions # 22) What are Jobs in Jenkins?

Answer # Jenkins can be used to perform the typical build server work, such as doing continuous/official/nightly builds, run tests, or perform some repetitive batch tasks. This is called “free-style software project” in Jenkins.


Jenkins Interview Questions # 23) How do you create a Job in Jenkins?

Answer # Go to Jenkins top page, select “New Job”, then choose “Build a free-style software project”. This job type consists of the following elements:

  • Optional SCM, such as CVS or Subversion, where your source code resides.
  • Optional triggers to control when Jenkins will perform builds.
  • Some sort of build script that performs the build (ant, maven, shell script, batch file, etc.) where the real work happens.
  • Optional steps to collect information out of the build, such as archiving the artifacts and/or recording javadoc and test results.
  • Optional steps to notify other people/systems with the build result, such as sending e-mails, IMs, updating issue tracker, etc.


Jenkins Interview Questions # 24) How do you configure automatic builds in Jenkins?

Answer # Builds in Jenkins can be triggered periodically (on a schedule, specified in the configuration), when source changes in the project have been detected, or automatically by requesting a predefined build-trigger URL of the form http://YOURHOST/job/PROJECTNAME/build.



Jenkins Interview Questions # 25) How to create a backup and copy files in Jenkins?

Answer # To create a backup, all you need to do is to periodically back up your JENKINS_HOME directory. This contains all of your build jobs configurations, your slave node configurations, and your build history. To create a back-up of your Jenkins setup, just copy this directory.

Q1. What is Puppet?

I will advise you to first give a small definition of Puppet. Puppet is a Configuration Management tool which is used to automate administration tasks.

Now, you should describe how the Puppet Master and Agent communicate.

Puppet has a Master-Slave architecture in which the Slave first sends a certificate signing request to the Master, and the Master signs that certificate in order to establish a secure connection between Puppet Master and Puppet Slave, as shown in the diagram below. The Puppet Slave sends a request to the Puppet Master, and the Puppet Master then pushes the configuration to the Slave.

Refer to the diagram below, which illustrates the description above:


Q2. How does Puppet work?

For this question, just explain the Puppet architecture. Refer to the diagram below:

[Diagram: Puppet Master-Slave architecture]

The following functions are performed in the above image:

  • The Puppet Agent sends Facts to the Puppet Master. Facts are basically key/value data pairs that represent some aspect of the Slave’s state, such as its IP address, uptime, operating system, or whether it’s a virtual machine. I will explain Facts in detail later in the blog.
  • The Puppet Master uses the Facts to compile a Catalog that defines how the Slave should be configured. A Catalog is a document that describes the desired state for each resource that the Puppet Master manages on a Slave. I will explain Catalogs and resources in detail later.
  • The Puppet Slave reports back to the Master indicating that the configuration is complete, which is visible in the Puppet dashboard.

Now the interviewer might dig in deep, so the next set of Puppet interview questions will test your knowledge about various components of Puppet.

Q3. What are Puppet Manifests?

This is a very important question, so make sure you answer it in the correct flow. In my opinion, you should first define Manifests.

Every node (or Puppet Agent) has its configuration details held on the Puppet Master, written in the native Puppet language. These details are written in a language that Puppet can understand and are termed Manifests. Manifests are composed of Puppet code and their filenames use the .pp extension.

Now give an example: you can write a manifest on the Puppet Master that creates a file and installs Apache on all Puppet Agents (Slaves) connected to the Puppet Master.
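A sketch of such a manifest, with illustrative resource names (on RedHat-family systems the package would be httpd rather than apache2):

```puppet
# site.pp -- applied to every agent that matches the default node
node default {
  package { 'apache2':
    ensure => installed,
  }

  file { '/var/www/html/index.html':
    ensure  => file,
    content => "Managed by Puppet\n",
    require => Package['apache2'],   # install Apache before managing the file
  }
}
```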

Q4. What is Puppet Module and How it is different from Puppet Manifest?

For this answer, I prefer the following explanation:

A Puppet Module is a collection of Manifests and data (such as facts, files, and templates), and they have a specific directory structure. Modules are useful for organizing your Puppet code, because they allow you to split your code into multiple Manifests. It is considered best practice to use Modules to organize almost all of your Puppet Manifests.

Puppet programs are called Manifests. Manifests are composed of Puppet code and their file names use the .pp extension. 

Q5. What is Facter in Puppet?

You are expected to answer what exactly Facter does in Puppet, so you should start by explaining:

Facter is basically a library that discovers and reports the per-Agent facts to the Puppet Master such as hardware details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and more. These facts are then made available in Puppet Master’s Manifests as variables.  
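For example, facts can drive conditional logic in a manifest. This sketch assumes the structured $facts hash available in newer Puppet versions:

```puppet
# Pick the right package name based on the OS family fact reported by Facter
if $facts['os']['family'] == 'RedHat' {
  $apache_pkg = 'httpd'
} else {
  $apache_pkg = 'apache2'
}

package { $apache_pkg:
  ensure => installed,
}
```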

Q6. What is Puppet Catalog?

I suggest you first describe the uses of a Puppet Catalog.

When configuring a node, Puppet Agent uses a document called a catalog, which it downloads from a Puppet Master. The catalog describes the desired state for each resource that should be managed, and may specify dependency information for resources that should be managed in a certain order.

If your interviewer wants to know more, mention the points below:

Puppet compiles a catalog using three main sources of configuration info:

  • Agent-provided data
  • External data
  • Puppet manifests

Q7. What size organizations should use Puppet?

There is no minimum or maximum organization size that can benefit from Puppet, but there are sizes that are more likely to benefit. Organizations with only a handful of servers are unlikely to consider maintaining those servers a real problem, while organizations with many servers are more likely to find it difficult to manage those servers manually, so using Puppet is more beneficial for those organizations.


Q: – What is a Module and how is it different from a Manifest?

Manifests defined inside modules can be called or included from other manifests, which makes manifest management easier. Modules also help you push specific manifests to a specific Node or Agent.

Q: – Command to check pending certificate requests?

puppetca --list (2.6)
puppet ca list (3.0)

Q: – Command to sign requested certificates?

puppetca --sign hostname-of-agent (2.6)
puppet ca sign hostname-of-agent (3.0)

Q: – Where does the Puppet Master store certificates?

In the directory given by the ssldir setting; by default this is /var/lib/puppet/ssl.
Q: – What is Facter ?

Sometimes you need to write manifests with conditional expressions based on agent-specific data, which is available through Facter. Facter provides information like the kernel version, distribution release, IP address, CPU info and so on. You can also define your own facts.

Q: – What is the use of etckeeper-commit-post and etckeeper-commit-pre on a Puppet agent?

etckeeper-commit-post: in this configuration file you can define commands and scripts that execute after configuration is pushed to the agent.
etckeeper-commit-pre: in this configuration file you can define commands and scripts that execute before configuration is pushed to the agent.

Q: – What is Puppet Kick ?

By default, the Puppet agent contacts the Puppet master at a regular interval known as the “runinterval”. Puppet Kick is a utility that lets you trigger a Puppet agent run from the Puppet master on demand.

Q: – What is MCollective ?

MCollective is a powerful orchestration framework. It lets you run actions on thousands of servers simultaneously, using existing plugins or ones you write yourself.

Q. Describe the most significant gain you made from automating a process through Puppet?
“I automated the configuration and deployment of Linux and Windows machines using Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the roles and profiles paradigm and documented the purpose of each module in README to ensure that others could update the module using Git. The modules I wrote are still being used, but they’ve been improved by my teammates and members of the community.”

Q. Tell me about a time when you used collaboration and Puppet to help resolve a conflict within a team?
“The development team wanted root access on test machines managed by Puppet in order to make specific configuration changes. We responded by meeting with them weekly to agree on a process for developers to communicate configuration changes and to empower them to make many of the changes they needed. Through our joint efforts, we came up with a way for the developers to change specific configuration values themselves via data abstracted through Hiera. In fact, we even taught one of the developers how to write Puppet code in collaboration with us.”

Q. Which open source or community tools do you use to make Puppet more powerful?
“Changes and requests are ticketed through Jira and we manage requests through an internal process. Then, we use Git and Puppet’s Code Manager app to manage Puppet code in accordance with best practices. Additionally, we run all of our Puppet changes through our continuous integration pipeline in Jenkins using the beaker testing framework.”


Q. What is the use of Virtual Resources in Puppet?

First, define what a virtual resource is.

A virtual resource specifies a desired state for a resource without necessarily enforcing it. Although a virtual resource can only be declared once, it can be realized any number of times.

I would suggest mentioning the uses of virtual resources as well:

  • Resources whose management depends on at least one of multiple conditions being met.
  • Overlapping sets of resources which might be needed by any number of classes.
  • Resources which should only be managed if multiple cross-class conditions are met.
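For example, a sketch of declaring and realizing a virtual resource (the user name is illustrative):

```puppet
# The '@' prefix declares the user virtually; nothing is managed yet.
@user { 'deploy':
  ensure => present,
}

# Any class that needs the user can realize it; multiple classes may
# realize the same virtual resource without a duplicate-declaration error.
realize User['deploy']
```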

Q. Can I access environment variables with Facter in Puppet?

I would suggest starting this answer by saying:

Not directly. However, Facter reads in custom facts from a special subset of environment variables. Any environment variable with a prefix of FACTER_ will be converted into a fact when Facter runs.

Now explain it to the interviewer with an example:

$ FACTER_FOO="bar"
$ export FACTER_FOO
$ facter | grep 'foo'
foo => bar

The value of the FACTER_FOO environment variable would now be available in your Puppet manifests as $foo, and would have a value of ‘bar’. Using shell scripting to export an arbitrary subset of environment variables as facts is left as an exercise for the reader.


1) DevOps! How would you define it in your own words?

It is the highly effective daily collaboration between software developers and IT/web operations engineers to produce a working system or release software.

A DevOps implementation is generally aligned with Agile methodologies, where deploying working software to production is generally the highest priority. In Agile implementations, emphasis is placed on people over processes, so a DevOps engineer must be willing to work very closely with Agile development teams to ensure they have the environment necessary to support functions such as automated testing, continuous integration, and continuous delivery. In a traditional implementation without DevOps, the operations team is often isolated from developers, frequently working under a help-desk model with general service level agreements in which the operations team treats developers as customers. This is a proven model that can work very well, but in a DevOps environment development and operations are streamlined and barriers between the two groups should not exist.

2) Why do we need DevOps?

Companies now face the need to deliver more applications, faster and better, to meet the ever more pressing demands of their users and to reduce time to market. DevOps often helps deployments happen very fast.

3) What is agile development and Scrum ?

Agile development is used as an alternative to the Waterfall development practice. In Agile, the development process is more iterative and incremental, with more testing and feedback at every stage of development, as opposed to only the last stage in Waterfall.

Scrum is used to manage complex software and product development, using iterative and incremental practices. Scrum has three roles: product owner, scrum master, and team.

4) Can we consider DevOps as an agile methodology ?

Of course! DevOps is a movement to reconcile and synchronize development and production through a set of good practices. Its emergence is motivated by deep changes in business demands: businesses want to speed up changes to stay closer to the requirements of the business and the customer.

5) What is a DevOps engineer’s duty with regard to Agile development?

A DevOps engineer works very closely with Agile development teams to ensure they have the environment necessary to support functions such as automated testing, continuous integration, and continuous delivery. The DevOps engineer must be in constant contact with the developers and make all required parts of the environment work together seamlessly.

Technical Questions

6) Have you worked with containers?

Containers are a form of lightweight virtualization, heavier than chroot but lighter than hypervisors. They provide isolation among processes while using the same kernel as the host machine, via the cgroups functionality within the kernel. Container formats differ among themselves in that some provide a more VM-like experience while others containerize only a single application.

LXC containers are the most VM-like and the most heavyweight, while Docker used to be more lightweight and was initially designed for single-application containers. In more recent releases, Docker introduced whole-machine containerization features, so it can now be used both ways. There are also rkt from CoreOS and LXD from Canonical, which builds upon LXC.

7) What is Kubernetes? Explain

It is a massively scalable tool for managing containers, made by Google. It is used internally on huge deployments, and because of that it is perhaps the best option for production use of containers. It supports self-healing by restarting non-responsive containers, it packs containers in a way that makes them take fewer resources, and it has many other great features.

8) What is the function of CI (Continuous Integration) server ? 

The CI server’s function is to continuously integrate all changes being made and committed to the repository by different developers and to check for compile errors. It needs to build the code several times a day, preferably after every commit, so that if a breakage happens it can detect which commit caused it.

Note: Available and popular CI tools include Jenkins, TeamCity, CircleCI, Hudson, Buildbot, etc.

9) What is Continuous Delivery ?

It is the practice of delivering software for testing as soon as it is built by the CI (Continuous Integration) server. It requires heavy use of a version control system so that builds are always available to developers and testers alike.

10) What is Vagrant and what is it used for ?

Vagrant is a tool that can create and manage virtualized (or containerized) environments for testing and developing software. At first, Vagrant used VirtualBox as the hypervisor for virtual environments, but it now supports KVM as well.
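A minimal Vagrantfile sketch, assuming the VirtualBox provider (the box name and memory size are illustrative examples):

```ruby
# Vagrantfile -- defines one Ubuntu VM under VirtualBox.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
end
```

Running vagrant up in the same directory would then create and boot the VM.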

11) Have you ever used any scripting language?

As far as scripting languages go, the simpler the better. In fact, the language itself isn’t as important as understanding design patterns and development paradigms such as procedural, object-oriented, or functional programming.

Currently, several scripting languages are available, so the question arises: what is the most appropriate language for the DevOps approach? Simply put, it depends on the context of the project and the tools used; for example, if Ansible is used it is good to know Python, and for Chef it is Ruby.

12) What is the role of a configuration management tool in devops ?

Automation plays an essential role in server configuration management. For that purpose we use CM tools; they store information about versions and builds of the software and testware, and provide traceability between software and testware.

13) What is the purpose of CM tools and which one you have used ?

Configuration Management tools’ purpose is to automate the deployment and configuration of software on a large number of servers. Most CM tools use an agent architecture, which means that every machine being managed needs to have an agent installed. My favorite tool is one that uses an agentless architecture: Ansible. It only requires SSH and Python, and if the raw module is used, not even Python is required, because it can run raw bash commands. Other available and popular CM tools are Puppet, Chef, and SaltStack.

14) What is OpenStack ?

OpenStack is often called a Cloud Operating System, and that is not far from the truth. It is a complete environment for deploying IaaS, giving you the possibility of building your own cloud similar to AWS. It is highly modular and consists of many sub-projects, so you can pick and choose the functionality you need. OpenStack distributions are available from Red Hat, Mirantis, HPE, Oracle, Canonical, and many others. It is a completely open source project, but some vendors make proprietary distributions.

15) Classify cloud platforms by category?

Cloud Computing software can be classified as Software as a Service or SaaS, Infrastructure as a Service or IaaS and Platform as a Service or PaaS.

SaaS is a piece of software that runs over the network on a remote server and exposes only a user interface to users, usually in a web browser; salesforce.com is an example.

Infrastructure as a Service is a cloud environment that exposes a VM to the user, to use as an entire OS, or a container where you can install anything you would install on your own server. Examples would be OpenStack, AWS, and Eucalyptus.

PaaS allows users to deploy their own applications on a preinstalled platform, usually a framework of application servers and a suite of developer tools. Examples would be OpenShift and Heroku.

16) What are easiest ways to build a small cloud ?

VMfest is one of the options for making an IaaS cloud out of VirtualBox VMs in no time. If you want a lightweight PaaS, there is Dokku, which is basically a bash script that makes a PaaS out of Docker containers.

17) What is AWS (Amazon Web Services)? Did you get a chance to work with Amazon tools?

AWS provides a set of flexible services designed to enable companies to create and deliver products with greater speed and reliability using AWS and DevOps practices. These services simplify provisioning and infrastructure management, application code deployment, automated software release processes, and monitoring of application and infrastructure performance. Amazon offers tools such as AWS CodeCommit, AWS CodeDeploy, and AWS CodePipeline that help make DevOps easier.

18) What is EC2 ?

Amazon EC2 Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and lets you easily run applications on a managed cluster of Amazon EC2 instances.

The EC2 service is inseparable from the concept of the Amazon Machine Image (AMI). The AMI is indeed the image of the virtual machine that will be executed. EC2 is based on Xen virtualization, which is why it is quite easy to move Xen servers to EC2.

19) Do you find any advantage of using NoSQL database over RDBMS ?

Typical web applications are built with a three-tier architecture. To carry the load, more web servers are simply added behind a load balancer to support more users. The ability to scale out is a key principle of cloud computing, increasingly important as VM instances can be easily added or removed to meet demand.

However, when it comes to the data layer, relational databases (RDBMS) do not scale out easily and do not provide a flexible data model. Handling more users means adding bigger servers, and large servers are complex, proprietary, and disproportionately expensive, in contrast to the low-cost “commodity hardware” architectures common in the cloud. Organizations are beginning to see performance issues with their relational databases for existing and new applications. Especially as the number of users increases, they realize the need for a faster and more flexible database. This is the time to begin evaluating and adopting NoSQL databases in their web applications.

20) What are the main difficulties when migrating from SQL to NoSQL?

Each record in a relational database conforms to a schema, with a fixed number of fields (columns), each having a specified purpose and data type. Every record has the same shape, and the data is normalized across several tables. The advantage is that there is less duplicate data in the database. The downside is that a schema change means running several “ALTER TABLE” statements, which require expensive locks on multiple tables simultaneously to ensure the change does not leave the database in an inconsistent state.

With document databases, on the other hand, each document can have a completely different structure from other documents. No additional management is required on the database side to handle schema changes.

21) What are the benefits of NoSQL document databases?

The main advantages of document databases are the following:

  • Flexible data model: data can be inserted without a predefined schema, and the format of the data being inserted can change at any time. This provides extreme flexibility, which ultimately allows significant business agility.
  • Consistent high performance: advanced NoSQL databases transparently cache data in system memory, a behavior that is completely transparent to the developer and to the operations team.
  • Easy scalability: NoSQL databases automatically distribute data between servers without requiring application involvement. Servers can be added and removed without disruption to applications, with data and I/O spread across multiple servers.

22 ) What are the main advantages of Git over CVS ?

The biggest advantage is that Git is distributed while CVS is centralised. Changes in CVS are per file, while changes (commits) in Git always refer to the whole project. Git also offers far more tools than CVS.

23) Difference between containers and virtual machines ?

Each VM instantiation requires starting a full OS, so VMs take up a lot of system resources; this quickly adds up to a lot of RAM and CPU cycles. A container host, by contrast, uses the process and file system isolation features of the Linux kernel.

24)  What is CoreOS, and what are alternatives ?

CoreOS is a stripped-down Linux distribution meant for running containers, mainly with its own rkt format, though others are also supported. It was initially based on ChromeOS and supported Docker. Alternatives are Canonical’s Ubuntu Snappy and Red Hat Enterprise Linux Atomic Host. Of course, containers can also be run on a regular Linux system.

25)  What is Kickstart ?

It is an automated way to install Red Hat-based systems. During a manual install, the Anaconda installer creates the file anaconda-ks.cfg, which can then be used with the system-config-kickstart tool to apply the same configuration automatically on multiple systems.
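A fragment of what such a kickstart file might contain (all values here are illustrative examples, not a working configuration):

```
# Illustrative kickstart fragment
install
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```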

26) What are tools for network monitoring? List few

For example Nagios, Icinga 2, OpenNMS, Splunk, and Wireshark. These tools are used to monitor network traffic and network quality, and to detect network problems even before they arise. Of those listed, only Splunk is proprietary; the others are open source.

27) What is Juju ?

Juju is an orchestration tool, primarily for Ubuntu, used for management, provisioning, and configuration of Ubuntu systems. It was initially written in Python and has since been rewritten in Go.

28) Give me an example of how you would handle projects?

As a DevOps engineer, I would demonstrate a clear understanding of DevOps project management tactics and also work with teams to set objectives, streamline workflow, maintain scope, research and introduce new tools or frameworks, translate requirements into workflow and follow up. I would resort to CI, release management and other tools to keep interdisciplinary projects on track.

29) What are post-mortem meetings?

A post-mortem is a meeting where we discuss what went wrong and what steps should be taken so that the failure doesn’t happen again. Post-mortem meetings are not about finding someone to blame; they are for preventing outages from reoccurring and for planning redesigns of the infrastructure so that downtime can be minimised. It is about learning from mistakes.

30) What do you know about the serverless model?

Serverless refers to a model where the existence of servers is hidden from developers: you no longer have to deal with capacity, deployments, scaling, fault tolerance, or the OS. It essentially reduces maintenance effort and lets developers focus on writing code.

Examples are AWS Lambda and the Auth0 serverless platform.

Devops Example : Deploying Applications with Ansible

Ansible is a lightweight, extensible solution for automating your application provisioning. Ansible has no dependencies other than Python and SSH. It doesn’t require any agents to be set up on the remote hosts, and it doesn’t leave any traces after it runs either. It lets you significantly simplify your operations by creating easy YAML-based playbooks. It’s good for configuration automation, deployments, and orchestration.

Components of Ansible

Playbooks : Ansible playbooks are a way to send commands to remote computers in a scripted way. Instead of using Ansible commands individually to remotely configure computers from the command line, you can configure entire complex environments by passing a script to one or more systems.

Ansible playbooks are written in the YAML data serialization format. If you don’t know what a data serialization format is, think of it as a way to translate a programmatic data structure (lists, arrays, dictionaries, etc) into a format that can be easily stored to disk. The file can then be used to recreate the structure at a later point. JSON is another popular data serialization format, but YAML is much easier to read.
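The round trip described above can be sketched with the JSON format from the Python standard library (the data is an invented example):

```python
import json

# A typical programmatic data structure: a list of dictionaries.
hosts = [{"name": "web01", "port": 80}, {"name": "web02", "port": 8080}]

# Serialize to a disk-friendly string, then recreate the structure later.
text = json.dumps(hosts)
restored = json.loads(text)
assert restored == hosts  # the round trip preserves the structure
```

YAML would express the same list more readably, which is one reason Ansible uses it for playbooks.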

Let’s look at a basic playbook that installs a web server (nginx) on multiple hosts:

---
- hosts: webservers
  tasks:
    - name: Installs nginx web server
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - start nginx

  handlers:
    - name: start nginx
      service: name=nginx state=started

The hosts file: (by default under /etc/ansible/hosts) this is the Ansible inventory file; it stores the hosts and their mappings to host groups (webservers, databases, etc.)

# Example of setting a host inventory by IP address.
# Also demonstrates how to set per-host variables.
[repository_servers]
example-repository

# Example of setting a host by hostname. Requires local lookup in /etc/hosts
# or DNS.
[dbservers]
db01

The SSH key: for the first run, we’ll need to give Ansible the SSH and sudo passwords, because one of the things the common role does is configure passwordless sudo and deploy an SSH key. After that, Ansible can execute the playbook’s commands on the remote nodes (hosts) and deploy the nginx web application.


Those are some of the questions you might encounter during an interview, but when learning DevOps concepts you should by no means concentrate only on these: read everything and anything related to Linux and open source, and try any software that might be of use to you. This article hopefully gives you an idea of where to start. Thank you for reading.

December 3, 2017
  • Author: Karthik
  • Category: Family