Behind the build: securing Java containers for card payment compliance
Modern cloud architectures have embraced containers, but with agility comes concern. An alarming 87% of container images running in production contain high or critical vulnerabilities. The annual cost of non-compliance averages $14.8 million (versus $5.5 million for maintaining compliance).
In a cloud environment regulated by PCI-DSS (the Payment Card Industry Data Security Standard), hardened containers are not optional — they are essential.
Over the past two years as a Platform Engineering Lead, I’ve been strengthening our Java container security on AWS to reduce attack surface and vulnerability exposure in payment card processing, where security is paramount.
The insights we gained go beyond checking compliance boxes. The work improved audit readiness, reduced our vulnerability footprint, and minimized business risk — and it showed that investing in container hardening is not just an IT task but a business imperative.
For technology leaders, our experience offers a practical roadmap:
Pick secure, widely-supported foundations
Minimize what you deploy
Automate vulnerability detection and updates
Lock down runtime environments
Cultivate a security-first team mindset
Read on to see how we built resilient systems capable of withstanding real-world threats while meeting PCI-DSS requirements.
The strategic importance of container hardening
In PCI-DSS environments, container hardening delivers three key benefits:
Audit readiness: Our containers remained consistently compliant, turning audit preparation from a fire drill into a routine process.
Security improvements: According to Gartner, "at least 99% of cloud security failures are the customer's fault". Every eliminated vulnerability reduced the attack surface, directly lowering breach risk.
Accelerated development: By integrating security measures into our pipeline, we caught problems early and avoided late-stage surprises, allowing teams to innovate quickly within security guardrails.
Base image selection: balancing security and practicality
We started by selecting a secure base container image. Any vulnerabilities in your base image persist in all containers built from it.
We initially experimented with Google's Distroless Java image, which carried fewer vulnerabilities than full OS images. However, Distroless presented practical challenges: customizing the images required Bazel rather than plain Dockerfiles (introducing friction for developers), and Google's shifting priorities raised concerns about long-term maintenance.
We evaluated specialized secure base images like Chainguard's Wolfi-based containers. While attractive for their security focus, these niche images lacked the broad user base we wanted for production reliability.
Ultimately, we standardized on Amazon Corretto images with Alpine. This choice hit our sweet spot:
Alpine proved exceptionally slim and showed zero critical or high vulnerabilities in scans
AWS actively maintains Corretto with consistent security updates
The images worked seamlessly with our AWS infrastructure, with no developer learning curve
After switching to Corretto Alpine, our container scans consistently returned clean for high-severity issues. One API container went from dozens of flagged vulnerabilities to zero high/critical findings.
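As a concrete illustration of this choice, a minimal service image on Corretto Alpine might look like the sketch below. The image tag, user names, and jar path are placeholders, not our actual configuration:

```dockerfile
# Illustrative Dockerfile on the Corretto Alpine base; tag and jar name are placeholders.
FROM amazoncorretto:21-alpine

# Run as an unprivileged user rather than root.
RUN addgroup -S app && adduser -S app -G app
USER app

WORKDIR /app
COPY --chown=app:app target/payments-api.jar app.jar

ENTRYPOINT ["java", "-jar", "app.jar"]
```

Keeping the Dockerfile this small also makes base-image upgrades a one-line change, which matters later when rebuilds become a daily routine.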
Minimizing the Java attack surface
We next focused on the application layer, guided by one principle: reduce what you ship.
Slimming the JVM with jlink
We used jlink to create trimmed Java runtimes that excluded unnecessary modules. The impact was significant:
Runtime size shrank to less than half of a standard JRE
Every excluded module eliminated potential vulnerabilities - if the module doesn't exist, it can't be exploited
From a compliance perspective, a smaller runtime meant fewer items in vulnerability scans. If an auditor inquired about a JDK CVE, our custom runtime often didn't include the affected module, simplifying the discussion to "not applicable."
Modularizing the JDK helped eliminate both risk and compliance overhead.
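The jlink approach can be sketched as a multi-stage build: one stage assembles a trimmed runtime, and the final stage contains only that runtime and the application. The module list and image tags below are illustrative, not our production configuration — your service's `jdeps` output should drive the actual list:

```dockerfile
# Stage 1: build a trimmed runtime with only the modules the service needs.
FROM amazoncorretto:21-alpine AS build
RUN jlink \
      --add-modules java.base,java.logging,java.net.http,java.sql \
      --strip-debug --no-man-pages --no-header-files \
      --output /opt/runtime

# Stage 2: ship only the trimmed runtime and the application jar.
FROM alpine:3.19
COPY --from=build /opt/runtime /opt/runtime
ENV PATH="/opt/runtime/bin:${PATH}"
COPY target/payments-api.jar /app/app.jar
USER nobody
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

Note that the Corretto Alpine build produces a musl-linked runtime, so the final stage can be plain Alpine with no JDK installed at all.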
Managing application dependencies
We implemented two key practices:
Dependency reduction: We audited services to remove unnecessary libraries, reducing both vulnerability surface and deployment size.
Continuous updates: We automated dependency monitoring using Trivy and the Renovate bot, which opened PRs when library updates were available.
To enforce this, we integrated scanning tools into CI. Trivy in our pipeline would break builds for critical or high-severity vulnerabilities, creating strong incentives for proactive dependency management.
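A CI step along these lines can enforce the policy. This fragment assumes GitHub Actions with the aquasecurity/trivy-action; the image name is a placeholder, and the same gate can be built with the plain `trivy image --exit-code 1 --severity CRITICAL,HIGH` CLI in any pipeline:

```yaml
# Illustrative CI gate: fail the build on unpatched high/critical findings.
- name: Scan container image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: payments-api:${{ github.sha }}
    severity: CRITICAL,HIGH
    exit-code: '1'
    ignore-unfixed: true   # only fail on findings that have a fix available
```

Failing only on fixable findings keeps the gate actionable: every red build comes with a concrete remediation path.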
Handling vulnerability findings
Setting up vulnerability scanners is straightforward, but the real challenge begins when they flag issues. The scanner can't tell you if a vulnerability is actually exploitable in your specific context. That requires human judgment.
In our experience, vulnerability findings rarely present clear-cut decisions. We once encountered a critical JDK font-rendering vulnerability that our scanners flagged with alarming severity. But for our headless API services that never process fonts, was this truly a risk? Meanwhile, delaying deployment meant postponing other important security fixes in the same release.
The most important practice we implemented was a regular reassessment process. Any vulnerability we decided to temporarily accept was documented with clear justification and assigned a review date. Every two weeks, we'd revisit these decisions to check if patches had become available or if our risk assessment had changed. This prevented the all-too-common "set it and forget it" pattern where temporary exceptions become permanent blind spots.
This pragmatic process helped us balance security with delivery. It acknowledged that perfect security is unattainable, but regular, mindful review keeps risk within acceptable bounds.
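One lightweight way to make accepted risks visible and reviewable is to keep them in the scanner's ignore file under version control, with the justification and review date alongside each entry. The sketch below uses Trivy's `.trivyignore` format with a placeholder CVE identifier; every line then shows up in code review and in the biweekly reassessment:

```text
# .trivyignore — temporarily accepted findings; reviewed every two weeks.

# Placeholder example: JDK font-rendering CVE flagged as critical, but our
# headless API services never process fonts. Accepted 2024-02-01.
# Review by 2024-03-15; remove once a patched base image is available.
CVE-2023-XXXXX
```

Because the file lives next to the Dockerfile, the exception and its expiry survive team changes — the opposite of a verbal "we'll get to it."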
Build-time vs. runtime scanning
Even with rigorous build-time scanning, new vulnerabilities emerge daily. We addressed this through:
Runtime monitoring: A container security platform that continuously checked running containers against newly disclosed CVEs, providing near real-time vulnerability alerts.
Daily rebuilds: We adopted automatic daily builds of our main services. By rebuilding frequently, we picked up security patches for base images and dependencies as they became available. This regular cadence meant our vulnerability exposure window rarely exceeded 24 hours.
When something like Log4Shell hits, we have high confidence that the next daily build will automatically incorporate the fix, limiting our exposure window. This contrasts sharply with industry norms: most images on Docker Hub hadn't been updated in over 120 days.
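The rebuild cadence itself is a small piece of CI configuration. This fragment assumes a GitHub Actions pipeline; the schedule and trigger names are illustrative:

```yaml
# Illustrative trigger: rebuild and rescan the service nightly so base-image
# and dependency patches are picked up within roughly 24 hours.
on:
  schedule:
    - cron: '0 3 * * *'    # 03:00 UTC every day
  workflow_dispatch: {}    # allow manual rebuilds for urgent fixes
```

The manual trigger matters as much as the cron entry: when a severe CVE lands mid-day, anyone on the team can cut a patched build immediately instead of waiting for the schedule.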
Operational safeguards
Hardening extends to runtime configurations. We implemented:
Read-only root filesystem: In AWS ECS/Fargate, we enabled read-only filesystem mounts. If attackers breached a container, they couldn't install malware or persist changes. We mounted separate volumes for /tmp and logs, configuring Java applications to use these writable areas.
Dropped privileges: Our containers ran as non-root users, further limiting damage potential during a breach.
Network and resource constraints: AWS security groups limited traffic between services, and we enforced CPU/memory limits to prevent compromised containers from affecting others.
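The filesystem and privilege safeguards above map to a handful of fields in an ECS task definition. This fragment is a sketch, not our full task definition; names and the user ID are placeholders:

```json
{
  "containerDefinitions": [
    {
      "name": "payments-api",
      "readonlyRootFilesystem": true,
      "user": "1000",
      "mountPoints": [
        { "sourceVolume": "tmp", "containerPath": "/tmp" }
      ]
    }
  ],
  "volumes": [
    { "name": "tmp" }
  ]
}
```

On the Java side, pointing the JVM at the writable mount (for example via `-Djava.io.tmpdir=/tmp`) keeps temp-file usage inside the volume while the rest of the filesystem stays immutable.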
These runtime safeguards demonstrated defense-in-depth to auditors while improving operational stability. With immutable infrastructure, we eliminated configuration drift and ensured consistency across environments.
Conclusion: security as a strategic investment
Hardening Java containers under PCI-DSS scrutiny transformed from a compliance task into a strategic advantage:
Reduced breach risk: We eliminated common weaknesses and implemented defense layers, addressing both real security and compliance requirements.
Faster delivery: Automated security checks in CI/CD meant developers spent less time on security retrofits. Our ability to ship updates daily with confidence became a competitive advantage.
Long-term cost reduction: While requiring upfront investment, our approach avoided costly emergency patching and compliance remediation projects.
Organizational confidence: The entire organization gained confidence knowing our systems were hardened against threats.
By treating container hardening as an ongoing strategic initiative, you turn compliance requirements into organizational strength.
Janne Sinivirta is a seasoned software architect and Principal DevOps Consultant at Polar Squad with over 25 years of experience in the software industry. He specializes in helping teams and organizations improve how they build, ship, and run software. Janne has led architectural design, implementation and delivery in industries ranging from healthcare to media, and brings a hands-on approach to both technical challenges and organizational change.