I will be speaking at Jenkins World 2016 in Santa Clara, CA, in September.
https://www.cloudbees.com/juc/event-details
https://www.cloudbees.com/jenkinsworld/home
https://www.cloudbees.com/pipelining-devops-jenkins-and-aws
Saturday, May 21, 2016
Thursday, May 12, 2016
Speaking at RVA Java Users Meetup (RJUG) on 5/18/16 at VCU
Meetup Link: http://www.meetup.com/Richmond-Java-Users-Group/events/230924145/
I will be speaking about using the AWS Java SDK to apply governance processes.
Location: VCU Engineering, East Hall, room E1232
601 W Main St #331, Richmond, VA
"As organizations clamor to reach the cloud, with their infrastructure, data, and applications, Cloud Governance becomes an operational imperative. Amazon Web Services (AWS) equips DevOps practitioners with the tools needed to automate governance controls. We will present an example of using AWS CloudTrail, CloudWatch, Lambda, and Dynamo to automate Cloud Governance around the AWS Simple Storage Service (S3) operations, with the AWS Java SDK."
Jenkins Consul K/V Builder Plugin Released
Do you use HashiCorp’s Consul software for Service Discovery or Key/Value configuration management? One of Consul's core features is the ability to store Key/Value data and let applications retrieve that data for configuration-lookup or service-discovery needs. As a Jenkins user, I routinely need to look up configurations and set environment variables for build execution. The Consul K/V Builder Plugin allows me to do just that. With this plugin I can read, write, and delete Key/Value data in Consul servers or clusters, even when Access Control List (ACL) security is enabled in Consul.
The plugin is written in Java with the Jenkins Open Source plugin framework (https://wiki.jenkins-ci.org/display/JENKINS/Plugin+tutorial) and libraries. It is a Maven project, and the source is found here: https://github.com/jenkinsci/consul-kv-builder-plugin
The plugin Wiki page is here: https://wiki.jenkins-ci.org/display/JENKINS/Consul-KV-Builder+Plugin
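To give a sense of what is happening under the covers, here is a rough Groovy sketch of the Consul K/V HTTP API that the plugin talks to (not the plugin's actual code): it writes a key, then reads it back and decodes the base64-encoded value. It assumes a local Consul agent on port 8500; the token query parameter is only needed when ACLs are enabled, and the key and value are made up for illustration.
import groovy.json.JsonSlurper

def consul = 'http://localhost:8500'
def token = 'my-acl-token'   // hypothetical ACL token; drop the ?token=... part if ACLs are off

// Write: PUT /v1/kv/<key> with the raw value as the request body
def put = new URL("${consul}/v1/kv/app/config/db.host?token=${token}").openConnection()
put.requestMethod = 'PUT'
put.doOutput = true
put.outputStream.withWriter { it << 'db.example.internal' }
assert put.responseCode == 200

// Read: GET /v1/kv/<key> returns a JSON array; the Value field is base64-encoded
def json = new JsonSlurper().parseText(new URL("${consul}/v1/kv/app/config/db.host?token=${token}").text)
println 'db.host = ' + new String(json[0].Value.decodeBase64())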
Friday, April 1, 2016
Abandon All Servers Ye Who Enter Here
So, you still have on-premises servers, and you want to move to the cloud. Or perhaps you have already moved to the cloud, but you kept your server-centric mindset and used Infrastructure-as-a-Service (IaaS) to build servers in the cloud that directly mimic what you had on premises.
You are doing it incorrectly, and you are not helping your organization as much as you think. Sure, there might be a modicum of cost savings in moving to the cloud. However, maintaining a server-centric mindset while consuming cloud resources is an anti-pattern.
Server Centricity
How do you know if you are still trapped in server-centricity? Let's take a short quiz:
Question #1: Do you name your servers?
Question #2: When clients must integrate to your applications, do they need to know the name of your servers, or worse yet, their IP addresses?
Question #3: Do you even have cloud resources that you label as "servers"?
Question #4: When the servers fail, how do you respond? Do you manually build a new server and manually reconnect clients? Even if you have a hot/warm/cold standby server, was it built with at least some manual intervention?
Question #5: When you need additional capacity, do you manually scale horizontally, or vertically?
If you answered yes to any of these questions, your cloud usage is still immature. How immature is relative and a matter of opinion, but most would agree that manual intervention should be minimal and a last resort.
Abandon Your Servers
To use the cloud effectively and move towards maturity, your organization must lessen, and eventually remove, the importance of individual servers. Servers are fleeting; applications are more important. In the realm of IaaS, instances and/or containers replace servers. Instances, and even more so, containers, are designed to be volatile. In fact, with proper automation in place, instances and containers come and go with little or no impact to applications. Applications stick around, underpinned by instances, containers, and automation. Availability and partition tolerance are easily achieved with proper automation and design.
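As a concrete, if simplified, illustration of that volatility, the Groovy sketch below uses the AWS SDK for Java to define an Auto Scaling group: the group, not any named server, owns the capacity, and AWS replaces failed instances automatically. The group name, launch configuration, zones, and sizes are all hypothetical.
import com.amazonaws.services.autoscaling.AmazonAutoScalingClient
import com.amazonaws.services.autoscaling.model.CreateAutoScalingGroupRequest

// Capacity is declared, not hand-built: the group keeps 2-6 instances running across
// two Availability Zones and replaces any instance that fails its health checks.
def autoScaling = new AmazonAutoScalingClient()   // credentials from the default provider chain
autoScaling.createAutoScalingGroup(new CreateAutoScalingGroupRequest()
        .withAutoScalingGroupName('web-app-asg')              // hypothetical names
        .withLaunchConfigurationName('web-app-launch-config')
        .withAvailabilityZones('us-east-1a', 'us-east-1b')
        .withMinSize(2)
        .withMaxSize(6)
        .withDesiredCapacity(2))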
Platform-as-a-Service (PaaS), underpinned by IaaS, is even more application-centric and relies even more heavily on automation than IaaS. In a properly designed PaaS implementation, automation allows the end users, the application owners, to place their applications into the cloud without having to build the instances and/or containers. PaaS users either supply the deployment artifacts or use PaaS CI/CD services to build and deploy them. In fact, for PaaS subscribers, the term "environment" takes on new meaning; it is the intersection of code and logical application definition, controlled by CI/CD processes and automation. They don't care where their applications run, as long as they run and their users can successfully use them to complete their respective tasks.
Automation and Indirection
Automation abstracts away the need for PaaS users to build and maintain IaaS resources. There is also a layer of indirection between the applications and the underlying infrastructure. With PaaS, application owners never need to worry about that underlying infrastructure; instead, they focus on code and application definitions. Over time, this places them in the same category as application users, or even SaaS subscribers.
This PaaS-like abstraction and layer of indirection should also be the goal of cloud-enablement teams that use IaaS resources to deliver services to application teams. With automation, well-defined practices, and well-designed stacks, cloud-enablement teams can deliver more self-service resources with IaaS. This self-service allows development and deployment teams to consume IaaS much as PaaS users do. With proper automation, there is little to no manual configuration or intervention needed from the application teams.
Where are the Servers?
And, where are the servers? They are forgotten, replaced by instances, containers, and stacks whose count shrinks and grows with the needs of the individual teams consuming them. Configuration-as-code and automation for resource formation and autoscaling have reduced the need for manual intervention.
So, if you really want to enter the cloud and be successful, be prepared to abandon your servers.
Sunday, March 20, 2016
A Prototyping Platform with Jenkins Pipeline
So, I have been using the Jenkins Pipeline Plugin (formerly known as Workflow) for a few months now. I like the idea of being able to code Groovy and Java directly into the Jenkins jobs, as Pipeline scripts. Though I have not used it yet, I can also store the scripts in an SCM. I think that I will eventually transition to that mode, having used inline scripting to prototype the jobs first.
MongoDB Pipeline
My latest Pipeline script parses a JSON file from an upstream job, munges the data, and then writes a new JSON document into MongoDB. For MongoDB integration, I chose to NOT use the existing Jenkins MongoDB plugins; I needed more flexibility. Since I know my way around Mongo and Java integration (MongoDB and SpringData), and I have admin rights to my Jenkins instance, I simply added the MongoDB Java Driver Jar file (mongo-java-driver-3.0.4.jar) to the Jenkins classpath via the WEB-INF/lib directory. This enables me to use the MongoDB Java Driver class files in my Groovy pipeline scripts, as seen below.
import com.mongodb.*

// upstreamJob and file are build parameters (see the master script below);
// writeToMongo toggles whether the parsed data is persisted to MongoDB.
stage 'parseData'
node {
    // Locate the JSON file in the upstream job's workspace
    String path = env.HOME + "/Home/jobs/DataAPI/jobs/" + upstreamJob + "/workspace/" + file
    if (fileExists(path)) {
        println "File Exists"
        def file = readFile path
        def jsonSlurper = new groovy.json.JsonSlurper()
        def object = jsonSlurper.parseText(file)
        def target = object.get(0).target
        def dataPoints = object.get(0).datapoints
        if (dataPoints.size == 0) {
            error 'No data found.'
        } else {
            println "Datapoints: " + dataPoints.size
            // Munge the parsed JSON into a Map suitable for a MongoDB document
            Map<String,Object> dataMap = new HashMap<String,Object>()
            List<Integer> seriesData = new ArrayList<Integer>()
            List<List<Integer>> seriesList = new ArrayList<List<Integer>>()
            dataMap.put("target", target)
            for (Object x : dataPoints) {
                if (x[0] != null) {
                    seriesData.add(Integer.valueOf(x[0].intValue()))
                    seriesData.add(x[1])
                    seriesList.add(seriesData)
                    seriesData = new ArrayList<Integer>()
                }
            }
            dataMap.put("series", seriesList)
            // Optionally persist the document via the MongoDB Java Driver
            if (new Boolean(writeToMongo) == true) {
                def mongoClient = new MongoClient("localhost", 29009)
                def collection = mongoClient.getDB("jenkins").getCollection("apiLogs")
                collection.insert(new BasicDBObject(dataMap))
                mongoClient.close()
            }
        }
    } else {
        error 'Data file does not exist at ' + path
    }
}
In the above script, I used a Groovy JSON Slurper to parse the JSON from a file and build an object. Then I needed to munge the data into a more suitable Java object that could then be persisted directly to MongoDB via the Java API.
As a developer, I see this as a very strong case for Jenkins pipeline scripting. Without this approach, being able to write Groovy and Java code directly into the Pipeline project, I would be at the mercy of integrating other Jenkins plugins to make this work, probably spanning multiple jobs.
Now, I get it; part of the strength of Jenkins is its collection of plugins. However, as a long-time Jenkins user and developer, I have had my share of plugin issues. It's a freeing experience to be able to "roll my own" customization. And never has it been easier to integrate Groovy and Java than with the Pipeline plugin.
As a matter of fact, this project is part of an orchestration: two parameterized projects triggered by a third. The "master" project is also a pipeline project; the script is below.
stage 'collect'
node {
    build job: 'CollectData', parameters: [[$class: 'StringParameterValue', name: 'target', value: '<TOPIC_VALUE>'], [$class: 'StringParameterValue', name: 'from', value: '-15min'], [$class: 'StringParameterValue', name: 'format', value: 'json']]
}
stage 'parse'
node {
    build job: 'ParseData', parameters: [[$class: 'StringParameterValue', name: 'file', value: 'data.json'], [$class: 'StringParameterValue', name: 'upstreamJob', value: 'CollectData']]
}
Of course, this is made very easy by using the Snippet Generator that is part of every pipeline project.
You can also use the DSL reference, found here: http://<JENKINS_URL>/workflow-cps-snippetizer/dslReference, and the introduction found on GitHub: https://github.com/jenkinsci/workflow-plugin. The plugin page is found here: https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin. And, finally, Andy Pemberton has written a reference card found here: https://dzone.com/refcardz/continuous-delivery-with-jenkins-workflow.
A Paradigm Shift
In my opinion, the ability to freeform program so easily in the Pipeline project is a game changer for Jenkins users. With this functionality, Jenkins is now a prototyping platform for CI/CD/DevOps as well as Integration and Monitoring. Sure, we will still use and write plugins. For example, in my orchestration, I used the HTTP Request Plugin in my Parameterized Build.
I used this plugin to make an HTTP API call, and passed the build parameters directly to the HTTP GET call as query-string arguments. Now, you may ask why, if I am so stoked about Jenkins Pipeline, I did not just use "curl" in a shell block in the pipeline. Simple: I did not want blocking I/O in the pipeline script. Instead, I chose to isolate this call into a separate upstream job, and use a downstream pipeline script to munge the downloaded data.
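For context, the short sketch below shows roughly what that upstream CollectData job does, with a hypothetical host and path: the three build parameters become query-string arguments on the GET call, and the response lands in the workspace as data.json for the downstream pipeline to parse. This is exactly the blocking call I wanted to keep out of the main pipeline script.
// Hypothetical endpoint; target, from, and format are the upstream job's build parameters.
node {
    def url = "http://metrics.example.com/api/data?target=${target}&from=${from}&format=${format}"
    writeFile file: 'data.json', text: new URL(url).text
}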
Security
Using the Jenkins Pipeline plugin does not mean that we abandon all we know about Jenkins security and best practices. In fact, users without the Overall/Run Scripts permission will run their scripts in the Groovy Sandbox, which limits them to pre-approved methods. Of course, users can elect not to use the sandbox. However, doing so means that all scripts require admin approval.
The Jenkins Reactor
In the example above, I have used Jenkins as a "batch reactor". With the Pipeline Plugin and orchestrated jobs, I have built a reactor that allows me to run multiple processes without leaving the context of the Jenkins environment. Who knows, in the future this orchestration may move to its own application space. However, for now I am incubating the prototype in my "Jenkins Reactor". Using Jenkins this way provides me with the container and services I need to quickly integrate with other systems and build a prototype application.
Wednesday, February 17, 2016
Speaking at RVA AWS Meetup on 2016-02-17 about Jenkins and AWS Integration for CI/CD
Central VA AWS User Group - 2016-02-17
I will be speaking on Jenkins integration with AWS EC2, CodeCommit, CodeDeploy, and CodePipeline. The talk location is:
Tuckahoe Library
1901 Starling Dr., Henrico, VA
Jimmy Ray with AuthX Consulting will present Jenkins usage with AWS.
• Jenkins concepts
• Options for Jenkins in AWS (yum, AWS Marketplace, etc.)
• Configuring Jenkins in AWS (Setup, Plugins, Proxies (NGINX), Route 53/ELB, Security (Jenkins, Groups, SSL, etc.))
• EC2 Roles
• Jenkins Slaves
• AWS CodeDeploy
• AWS CodeCommit
• AWS CodePipeline
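As a small taste of the CodeDeploy and EC2 Roles items above, here is a minimal Pipeline-style Groovy sketch that kicks off a CodeDeploy deployment from a Jenkins node. The application, deployment group, bucket, and region names are hypothetical; the AWS CLI picks up credentials from the EC2 instance role attached to the build slave, so no access keys live in Jenkins.
stage 'deploy'
node {
    // Trigger a CodeDeploy deployment of a revision previously uploaded to S3.
    sh "aws deploy create-deployment " +
       "--region us-east-1 " +
       "--application-name MyApp " +
       "--deployment-group-name MyApp-Test " +
       "--s3-location bucket=my-artifact-bucket,key=MyApp-${env.BUILD_NUMBER}.zip,bundleType=zip"
}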
Jenkins has its own Pipeline plugin (formerly known as Workflow). Information can be found here: https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
Below is an example pipeline Groovy script that I wrote to build and deploy a CMS application from code in GitHub. Maven builds the project, and then shell scripts perform the deployment to AWS EC2 instances. Without Jenkins Pipeline, this would have been more complex, involving multiple jobs. Though it could still be broken down into multiple jobs, this particular example was done in a single Jenkins job.
stage 'build'
node {
    // Check out the develop branch and build the CMS project with Maven
    git url: 'git@github.com:co/XYZ.git', credentialsId: 'co-XYZ-jenkins', branch: 'develop'
    def v = version()
    if (v) {
        echo "Building version ${v}"
    }
    def mvnHome = tool 'Maven 3.3.3'
    sh "export JAVA_HOME=/opt/jdk1.8.0_60/ && ${mvnHome}/bin/mvn -f hippo/pom.xml clean verify -Dbuild.number=" + env.BUILD_NUMBER
    step([$class: 'ArtifactArchiver', artifacts: '**/target/site.war,**/target/cms.war', fingerprint: true])
}
stage 'tag'
node {
    // Tag the build in Git so the deployed artifacts can be traced back to a commit
    echo "Build-" + env.BUILD_TAG
    dir(env.HOME + '/workspace/XYZ-CMS/TEST/TestWF/hippo') {
        sh 'git tag -a ' + env.BUILD_TAG + ' -m "Auto-tagging build number ' + env.BUILD_TAG + ' from Jenkins."'
        sh 'git push origin ' + env.BUILD_TAG
    }
}
stage 'undeploy'
node {
    // Remove the previously deployed WARs and exploded directories from Tomcat
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo rm -rf /usr/share/tomcat8/webapps/cms.war'"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo rm -rf /usr/share/tomcat8/webapps/cms'"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo rm -rf /usr/share/tomcat8/webapps/site.war'"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo rm -rf /usr/share/tomcat8/webapps/site'"
}
stage 'restartTomcat8'
node {
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo service tomcat8 restart'"
}
stage 'restartNGINX'
node {
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo nginx -s reload'"
}
stage 'deployCms'
node {
    // Copy the CMS WAR to the EC2 instance and drop it into the Tomcat webapps directory
    sh "scp -i " + env.G_BUILD_TEST_KEY + " " + env.HOME + "/workspace/XYZ-CMS/TEST/TestWF/hippo/cms/target/cms.war " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + ":~"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo chmod 755 ~ec2-user/cms.war'"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo cp ~ec2-user/cms.war /usr/share/tomcat8/webapps'"
    // Give the CMS WAR time to deploy before the site WAR is deployed
    sleep 120
}
stage 'deploySite'
node {
    sh "scp -i " + env.G_BUILD_TEST_KEY + " " + env.HOME + "/workspace/XYZ-CMS/TEST/TestWF/hippo/site/target/site.war " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + ":~"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo chmod 755 ~ec2-user/site.war'"
    sh "ssh -i " + env.G_BUILD_TEST_KEY + " " + env.G_EC2_USER + "@" + env.G_XYZ_CMS_TEST_IP + " 'sudo cp ~ec2-user/site.war /usr/share/tomcat8/webapps'"
}
stage 'testCms'
node {
    // Smoke test: curl the CMS URL and echo the HTTP status code
    def sout = new StringBuilder(), serr = new StringBuilder()
    def proc = "curl -s -o /dev/null -I -w '%{http_code}' http://cms-test.xyz.com/cms".execute()
    proc.consumeProcessOutput(sout, serr)
    proc.waitForOrKill(30000)
    //println "out> $sout err> $serr"
    echo sout.toString()
}
stage 'testSite'
node {
    // Smoke test: curl the site URL and echo the HTTP status code
    def sout = new StringBuilder(), serr = new StringBuilder()
    def proc = "curl -s -o /dev/null -I -w '%{http_code}' http://site-test.xyz.com".execute()
    proc.consumeProcessOutput(sout, serr)
    proc.waitForOrKill(30000)
    //println "out> $sout err> $serr"
    echo sout.toString()
}
// Pulls the project version from the Maven POM
def version() {
    def matcher = readFile('hippo/pom.xml') =~ '<version>(.+)</version>'
    matcher ? matcher[1][1] : null
}