
Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: docker/ShibbIdP_ConfigBuilder_Container
base: IdP3.3
head repository: docker/ShibbIdP_ConfigBuilder_Container
compare: master
Able to merge. These branches can be automatically merged.
Showing with 905 additions and 460 deletions.
  1. +13 −15 Dockerfile
  2. +12 −4 Dockerfile.template
  3. +4 −4 Dockerfile.windows.template
  4. +151 −65 Jenkinsfile
  5. +4 −7 README.md
  6. +3 −2 common.bash
  7. +111 −363 configBuilder.sh
  8. +31 −0 corretto-signing-key.pub
  9. +288 −0 duo-oidc-truststore.asc
  10. +288 −0 oidc-common-truststore.asc
28 changes: 13 additions & 15 deletions Dockerfile
@@ -1,24 +1,22 @@
FROM centos:latest
FROM --platform=$TARGETPLATFORM rockylinux:8.8

# Install needed utils
RUN rm -fr /var/cache/yum/* && yum clean all && yum -y install --setopt=tsflags=nodocs epel-release && \
yum -y install wget zip unzip rsync openssl && \
yum -y install wget zip unzip rsync openssl java-latest-openjdk && \
yum -y clean all

#download/install Java
ENV JAVA_HOME /usr
# Install Corretto Java JDK
#Corretto download page: https://docs.aws.amazon.com/corretto/latest/corretto-11-ug/downloads-list.html
#ARG CORRETTO_URL_PERM=https://corretto.aws/downloads/latest/amazon-corretto-11-x64-linux-jdk.rpm
#ARG CORRETTO_RPM=amazon-corretto-11-x64-linux-jdk.rpm
#COPY corretto-signing-key.pub .
#RUN curl -O -L $CORRETTO_URL_PERM \
# && rpm --import corretto-signing-key.pub \
# && rpm -K $CORRETTO_RPM \
# && rpm -i $CORRETTO_RPM \
# && rm -r corretto-signing-key.pub $CORRETTO_RPM
#ENV JAVA_HOME=/usr/lib/jvm/java-11-amazon-corretto

# Install Zulu Java
RUN rpm --import http://repos.azulsystems.com/RPM-GPG-KEY-azulsystems \
&& curl -o /etc/yum.repos.d/zulu.repo http://repos.azulsystems.com/rhel/zulu.repo \
&& yum -y install zulu-8 && alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 200000

#RUN wget -nv --no-cookies --no-check-certificate "http://javadl.oracle.com/webapps/download/AutoDL?BundleId=233161_512cd62ec5174c3487ac17c61aaa89e8" -O /tmp/jre-8u171-linux-x64.rpm && \
# yum -y install /tmp/jre-8u171-linux-x64.rpm && \
# rm -f /tmp/jre-8u171-linux-x64.rpm && \
# alternatives --install /usr/bin/java jar $JAVA_HOME/bin/java 200000 && \
# alternatives --install /usr/bin/javaws javaws $JAVA_HOME/bin/javaws 200000 && \
# alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 200000

#copy files
RUN mkdir -p /output && mkdir -p /scriptrun
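
A quick local sanity check for the base-image and Java changes above is to build the image and confirm a JDK is on the PATH; a minimal sketch, assuming Docker is installed (the local tag is arbitrary):

# Illustrative smoke test for the rockylinux:8.8 + OpenJDK change (tag is arbitrary).
docker build -t configbuilder-smoketest .
docker run --rm configbuilder-smoketest java -version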
16 changes: 12 additions & 4 deletions Dockerfile.template
@@ -1,4 +1,4 @@
FROM tier/shib-idp:latest
FROM i2incommon/shib-idp:latest5

# The build args below can be used at build-time to tell the build process where to find your config files. This is for a completely burned-in config.
ARG TOMCFG=config/tomcat
@@ -11,15 +11,23 @@ ARG SHBEDWAPP=config/shib-idp/edit-webapp
ARG SHBMSGS=config/shib-idp/messages
ARG SHBMD=config/shib-idp/metadata

# copy in the needed config files
# copy in those needed config files
ADD ${TOMCFG} /usr/local/tomcat/conf
ADD ${TOMCERT} /opt/certs
ADD ${TOMWWWROOT} /usr/local/tomcat/webapps/ROOT
ADD ${SHBCFG} /opt/shibboleth-idp/conf
ADD ${SHBCREDS} /opt/shibboleth-idp/credentials
ADD ${SHBVIEWS} /opt/shibboleth-idp/views
ADD ${SHBEDWAPP} /opt/shibboleth-idp/edit-webapp
ADD ${SHBMSGS} /opt/shibboleth-idp/messages
#ADD ${SHBEDWAPP} /opt/shibboleth-idp/edit-webapp
#ADD ${SHBMSGS} /opt/shibboleth-idp/messages
ADD ${SHBMD} /opt/shibboleth-idp/metadata

# new for 4.1.0+: install the Duo OIDC integration
# https://wiki.shibboleth.net/confluence/display/IDPPLUGINS/DuoOIDCAuthnConfiguration
# For unattended install of plugins, trust must be manually bootstrapped. You should never automate the retrieval of this file (like this) for production.
#ADD https://github.internet2.edu/raw/docker/ShibbIdP_ConfigBuilder_Container/master/oidc-common-truststore.asc /opt/shibboleth-idp/credentials/net.shibboleth.idp.plugin.authn.duo.nimbus/truststore.asc
#ADD https://github.internet2.edu/raw/docker/ShibbIdP_ConfigBuilder_Container/master/duo-oidc-truststore.asc /opt/shibboleth-idp/credentials/net.shibboleth.oidc.common/truststore.asc
#install the plugins
#RUN /opt/shibboleth-idp/bin/plugin.sh --noPrompt -i https://shibboleth.net/downloads/identity-provider/plugins/oidc-common/1.0.0/oidc-common-dist-1.0.0.zip
#RUN /opt/shibboleth-idp/bin/plugin.sh --noPrompt -i https://shibboleth.net/downloads/identity-provider/plugins/duo-oidc/1.0.0/idp-plugin-duo-nimbus-dist-1.0.0.zip
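
For production, the comments above stress bootstrapping plugin trust manually rather than fetching truststore.asc during the build. A rough manual equivalent, run inside the IdP image, assuming the truststore files have already been obtained and verified out-of-band:

# Sketch: place pre-verified truststores at credentials/<plugin-id>/truststore.asc,
# then run the same unattended installs shown (commented) in the template above.
mkdir -p /opt/shibboleth-idp/credentials/net.shibboleth.oidc.common \
         /opt/shibboleth-idp/credentials/net.shibboleth.idp.plugin.authn.duo.nimbus
# (copy your verified truststore.asc files into those two directories here)
/opt/shibboleth-idp/bin/plugin.sh --noPrompt -i https://shibboleth.net/downloads/identity-provider/plugins/oidc-common/1.0.0/oidc-common-dist-1.0.0.zip
/opt/shibboleth-idp/bin/plugin.sh --noPrompt -i https://shibboleth.net/downloads/identity-provider/plugins/duo-oidc/1.0.0/idp-plugin-duo-nimbus-dist-1.0.0.zip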

8 changes: 4 additions & 4 deletions Dockerfile.windows.template
@@ -1,4 +1,4 @@
FROM tier/shibbidp_novm_windows:latest
FROM tier/shib-idp-windows:latest

#params for supplying your IdP config to your container (can be overridden at build-time using build-args)
ARG TOMCFG=config\\tomcat
@@ -18,9 +18,9 @@ ADD $TOMCERT c:\\opt\\certs
ADD $TOMWWWROOT c:\\Tomcat\\webapps\\ROOT
ADD $SHBCFG c:\\opt\\shibboleth-idp\\conf
ADD $SHBCREDS c:\\opt\\shibboleth-idp\\credentials
ADD $SHBVIEWS c:\\opt\\shibboleth-idp\\views
ADD $SHBEDWAPP c:\\opt\\shibboleth-idp\\edit-webapp
ADD $SHBMSGS c:\\opt\\shibboleth-idp\\messages
#ADD $SHBVIEWS c:\\opt\\shibboleth-idp\\views
#ADD $SHBEDWAPP c:\\opt\\shibboleth-idp\\edit-webapp
#ADD $SHBMSGS c:\\opt\\shibboleth-idp\\messages
ADD $SHBMD c:\\opt\\shibboleth-idp\\metadata

# Uncomment if using secrets; removes existing files from the container so that secrets can propagate (issue with Windows containers)
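
Both templates expose their config locations as build args, so a burned-in config can come from a different directory tree without editing the file. An illustrative override using the ARG names defined above, assuming the template has been rendered to a Dockerfile (the directory values and tag are hypothetical):

# Illustrative: point the build at alternate config directories via build args.
docker build \
  --build-arg TOMCFG=myconfig/tomcat \
  --build-arg SHBCFG=myconfig/shib-idp/conf \
  -t my/shib-idp .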
216 changes: 151 additions & 65 deletions Jenkinsfile
@@ -1,75 +1,160 @@
node {

stage 'Checkout'
pipeline {
agent { node { label 'docker-multi-arch' } }
environment {
maintainer = "t"
imagename = 's'
tag = 'l'
DOCKERHUBPW=credentials('tieradmin-dockerhub-pw')

checkout scm

stage 'Acquire util'

sh 'mkdir -p tmp && mkdir -p bin'
dir('tmp'){
git([ url: "https://github.internet2.edu/docker/util.git",
credentialsId: "jenkins-github-access-token" ])
sh 'mv ./bin/* ../bin/.'
}
sh 'rm -rf tmp'
stages {
stage('Setting build context') {
steps {
script {
maintainer = maintain()
imagename = imagename()
if(env.BRANCH_NAME == "master") {
tag = "latest"
} else {
tag = env.BRANCH_NAME.toLowerCase()
}
if(!imagename){
echo "You must define an imagename in common.bash"
currentBuild.result = 'FAILURE'
}
sh 'mkdir -p tmp && mkdir -p bin'
dir('tmp'){
git([ url: "https://github.internet2.edu/docker/util.git", credentialsId: "jenkins-github-access-token" ])
sh 'rm -rf ../bin/*'
sh 'mv ./bin/* ../bin/.'
}
// Build and test scripts expect that 'tag' is present in common.bash. This is necessary for both Jenkins and standalone testing.
// We don't care if there are more 'tag' assignments there. The latest one wins.
sh "echo >> common.bash ; echo \"tag=\\\"${tag}\\\"\" >> common.bash ; echo common.bash ; cat common.bash"
}
}
}
stage('Clean') {
steps {
script {
try{
sh 'bin/destroy.sh >> debug'
} catch(error) {
def error_details = readFile('./debug');
def message = "BUILD ERROR: There was a problem building the Base Image. \n\n ${error_details}"
sh "rm -f ./debug"
handleError(message)
}
}
}
}
stage('Build') {
steps {
script {
try{
sh 'docker login -u tieradmin -p $DOCKERHUBPW'
// fails if already exists
// sh 'docker buildx create --use --name multiarch --append'
sh 'docker buildx inspect --bootstrap'
sh 'docker buildx ls'
sh "docker buildx build --platform linux/amd64 -t ${imagename}_${tag} --load ."
sh "docker buildx build --platform linux/arm64 -t ${imagename}_${tag}:arm64 --load ."
} catch(error) {
def error_details = readFile('./debug');
def message = "BUILD ERROR: There was a problem building ${maintainer}/${imagename}:${tag}. \n\n ${error_details}"
sh "rm -f ./debug"
handleError(message)
}
}
}
}
stage('Scan') {
steps {
script {
try {
echo "Starting security scan..."
// Install trivy and HTML template
sh 'curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.31.1'
sh 'curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/html.tpl > html.tpl'

stage 'Setting build context'

def maintainer = maintainer()
def imagename = imagename()
def tag

// Tag images created on master branch with 'latest'
if(env.BRANCH_NAME == "master"){
tag = "latest"
}else{
tag = env.BRANCH_NAME
}

if(!imagename){
echo "You must define an imagename in common.bash"
currentBuild.result = 'FAILURE'
}
if(maintainer){
echo "Building ${imagename}:${tag} for ${maintainer}"
}

stage 'Build'
try{
sh 'bin/build.sh &> debug'
} catch(error) {
def error_details = readFile('./debug');
def message = "BUILD ERROR: There was a problem building ${imagename}:${tag}. \n\n ${error_details}"
sh "rm -f ./debug"
handleError(message)
// Scan container for all vulnerability levels
echo "Scanning for all vulnerabilities..."
sh 'mkdir -p reports'
// 2 commented scans below are OS-only, in case timeout issues occur
sh "trivy image --timeout 10m --ignore-unfixed --vuln-type os,library --severity CRITICAL,HIGH --no-progress --security-checks vuln --format template --template '@html.tpl' -o reports/container-scan.html ${imagename}_${tag}"
// sh "trivy image --ignore-unfixed --vuln-type os --severity CRITICAL,HIGH --no-progress --security-checks vuln --format template --template '@html.tpl' -o reports/container-scan.html ${imagename}_${tag}"
sh "trivy image --timeout 10m --ignore-unfixed --vuln-type os,library --severity CRITICAL,HIGH --no-progress --security-checks vuln --format template --template '@html.tpl' -o reports/container-scan-arm.html ${imagename}_${tag}:arm64"
// sh "trivy image --ignore-unfixed --vuln-type os --severity CRITICAL,HIGH --no-progress --security-checks vuln --format template --template '@html.tpl' -o reports/container-scan-arm.html ${imagename}_${tag}:arm64"
publishHTML target : [
allowMissing: true,
alwaysLinkToLastBuild: true,
keepAll: true,
reportDir: 'reports',
reportFiles: 'container-scan.html',
reportName: 'Security Scan',
reportTitles: 'Security Scan'
]
publishHTML target : [
allowMissing: true,
alwaysLinkToLastBuild: true,
keepAll: true,
reportDir: 'reports',
reportFiles: 'container-scan-arm.html',
reportName: 'Security Scan (ARM)',
reportTitles: 'Security Scan (ARM)'
]
// Scan again and fail on CRITICAL vulns
//below can be temporarily commented to prevent build from failing
echo "Scanning for CRITICAL vulnerabilities only (fatal)..."
// 2 scans below are temp (os scan only, no lib scan), while timeout issues are worked on
// sh "trivy image --ignore-unfixed --vuln-type os,library --exit-code 1 --severity CRITICAL ${imagename}_${tag}"
// sh "trivy image --ignore-unfixed --vuln-type os,library --exit-code 1 --severity CRITICAL ${imagename}_${tag}:arm64"
sh "trivy image --ignore-unfixed --vuln-type os --exit-code 1 --severity CRITICAL ${imagename}_${tag}"
sh "trivy image --ignore-unfixed --vuln-type os --exit-code 1 --severity CRITICAL ${imagename}_${tag}:arm64"
//echo "Skipping scan for CRITICAL vulnerabilities (temporary)..."
} catch(error) {
def error_details = readFile('./debug');
def message = "BUILD ERROR: There was a problem scanning ${imagename}:${tag}. \n\n ${error_details}"
sh "rm -f ./debug"
handleError(message)
}
}
}
}
stage('Push') {
steps {
script {
sh 'docker login -u tieradmin -p $DOCKERHUBPW'
// fails if already exists
// sh 'docker buildx create --use --name multiarch --append'
sh 'docker buildx inspect --bootstrap'
sh 'docker buildx ls'
echo "Pushing image to dockerhub..."
sh "docker buildx build --push --platform linux/arm64,linux/amd64 -t ${maintainer}/${imagename}:${tag} ."
}
}
}
stage('Notify') {
steps{
echo "$maintainer"
slackSend color: 'good', message: "$maintainer/$imagename:$tag pushed to DockerHub"
}
}
}

/* stage 'Tests'
try{
sh 'bin/test.sh &> debug'
} catch(error) {
def error_details = readFile('./debug');
def message = "BUILD ERROR: There was a problem building ${imagename}:${tag}. \n\n ${error_details}"
sh "rm -f ./debug"
handleError(message)
}*/

stage 'Push'

docker.withRegistry('https://registry.hub.docker.com/', "dockerhub-$maintainer") {
def baseImg = docker.build("$maintainer/$imagename")
baseImg.push("$tag")
post {
always {
echo 'Done Building.'
}
failure {
// slackSend color: 'good', message: "Build failed"
handleError("BUILD ERROR: There was a problem building ${maintainer}/${imagename}:${tag}.")
}
}

stage 'Notify'

slackSend color: 'good', message: "$maintainer/$imagename:$tag pushed to DockerHub"

}

def maintainer() {

def maintain() {
def matcher = readFile('common.bash') =~ 'maintainer="(.+)"'
matcher ? matcher[0][1] : 'tier'
}
@@ -83,6 +168,7 @@ def handleError(String message){
echo "${message}"
currentBuild.setResult("FAILED")
slackSend color: 'danger', message: "${message}"
//step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'chubing@internet2.edu', sendToIndividuals: true])
//step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'pcaskey@internet2.edu', sendToIndividuals: true])
sh 'exit 1'
}
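
The scan stage can be reproduced outside Jenkins to debug findings before a pipeline run; a minimal local sketch using the same Trivy version and flags as the Jenkinsfile (the image name is a placeholder):

# Illustrative local repro of the pipeline's Trivy scan (image name is a placeholder).
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.31.1
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/html.tpl > html.tpl
mkdir -p reports
# HTML report on HIGH/CRITICAL, then a fatal pass (exit code 1) on unfixed CRITICAL OS vulns.
trivy image --timeout 10m --ignore-unfixed --vuln-type os,library --severity CRITICAL,HIGH \
  --no-progress --security-checks vuln --format template --template '@html.tpl' \
  -o reports/container-scan.html myimage_latest
trivy image --ignore-unfixed --vuln-type os --exit-code 1 --severity CRITICAL myimage_latest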

11 changes: 4 additions & 7 deletions README.md
@@ -1,22 +1,19 @@
# ShibbIdP_ConfigBuilder_Container

This container runs the configBuilder script and generates a Dockerfile (and related dependencies) along with a default TIER Shibboleth IdP config, customized based on the user's responses to a few questions.
This container runs the configBuilder script and generates a Dockerfile (and related dependencies) along with a default TAP Shibboleth IdP config, customized based on the user's responses to a few questions.

The config is written to /output in the container, which users should bind-mount to a directory of their choosing (best to use an empty directory).

The result is a set of files and directories containing everything needed to build a TIER Shibboleth IdP container. This includes the Dockerfile and related dependencies, along with the default TIER IdP config.

Once the files have been written to your directory, the container terminates and can be deleted.

Build this container like this:
docker build -t tierconfigbuilder .

Run the container like this:
docker run --interactive --tty -v $PWD:/output -e "BUILD_ENV=LINUX" tier/shibbidp_configbuilder_container
You can run the container directly from the docker hub like this:
docker run -it -v $PWD:/output -e "BUILD_ENV=LINUX" tier/shibbidp_configbuilder_container

-OR, for a Windows container, like this-

docker run --interactive --tty -v $PWD:/output -e "BUILD_ENV=WINDOWS" tier/shibbidp_configbuilder_container
docker run -it -v $PWD:/output -e "BUILD_ENV=WINDOWS" tier/shibbidp_configbuilder_container

After answering the questions in the configBuilder, your config will be written to several files and directories in the directory you mounted in the 'docker run' command above. By default, certain IdP config files are placed in a 'SECRETS' folder at the root, to (a) separate them from the rest of the config files so that (b) the remaining files can be easily burned into the container.
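
Because the output directory is a complete build context, the generated IdP image can be built straight from it; an illustrative next step (the tag is hypothetical):

# Illustrative: build the generated IdP image from the mounted output directory.
docker build -t myorg/shib-idp:test .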

5 changes: 3 additions & 2 deletions common.bash
@@ -1,5 +1,6 @@
registry="docker.io"
maintainer="tier"
maintainer="i2incommon"
previous_maintainer="tier"
basename="shibbidp_configbuilder_container"
imagename="shibbidp_configbuilder_container"
version="0.3"
version="0.8"
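
As noted in the Jenkinsfile above, the 'Setting build context' stage appends a branch-derived tag assignment to this file so the build and test scripts can read it (the latest assignment wins); a sketch of the line it appends on the master branch:

# Appended to common.bash by Jenkins at build time (sketch; value derives from the branch name):
tag="latest"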