Here I write about things that have washed ashore during my life as a Java software developer. Sometimes it's a snippet, sometimes a rant; I don't generally follow a specific topic or theme. I'm not a native English speaker, so bear with me.
15.09.2025
The goal is to replicate values from a Vault secret into Kubernetes (K8S). There is a K8S operator called external-secrets (ESO) for this purpose. It can be deployed quite conveniently with Helm:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace
Next, create a test secret in Vault:
vault kv put secret/foo my-value=mytopsecretvalue
On to the manifests!
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.myhomenet.lan"
      path: "secret"
      version: "v2"
      caBundle: "..." # Base64-encoded CA certificate, see explanation below
      auth:
        tokenSecretRef:
          name: "vault-token"
          key: "token"
What’s happening here?
kind: SecretStore
This is ESO’s way of defining a connection backend. Here, we’re telling ESO:
“Whenever you need to fetch a secret, talk to this Vault instance.”
Vault provider details:
- server: The Vault URL (https://vault.myhomenet.lan).
- path: The mount path in Vault (secret/ in this case).
- version: v2: This specifies the KV secrets engine version. Version 2 supports multiple versions of the same secret.
- caBundle: A base64-encoded CA certificate, ensuring Kubernetes trusts Vault’s TLS connection.
- auth: How ESO authenticates against Vault. Here we’re using a static token stored in a Kubernetes Secret (vault-token).
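For reference, the Kubernetes Secret holding the Vault token and the caBundle value can be created like this (a sketch; the token value and the CA file name are placeholders):
# create the Secret referenced by tokenSecretRef
kubectl create secret generic vault-token --from-literal=token=<YOUR_VAULT_TOKEN>
# base64-encode the CA certificate for the caBundle field
base64 -w0 vault-ca.crt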
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: example-sync
  data:
    - secretKey: foobar
      remoteRef:
        key: foo
        property: my-value
Breaking it down:
- kind: ExternalSecret: This is where you declare what you want from Vault and how to present it in Kubernetes.
- refreshInterval: "15s": ESO checks Vault every 15 seconds to see if the secret changed. If it has, ESO updates the Kubernetes Secret automatically.
- secretStoreRef: Tells ESO to use the Vault connection we defined earlier (vault-backend).
- target.name: example-sync: ESO will create a native Kubernetes Secret called example-sync.
- data section:
  - secretKey: foobar: Inside the Kubernetes Secret, the key will be foobar.
  - remoteRef.key: foo: Tells ESO to look up the Vault secret at secret/foo.
  - property: my-value: From that secret, extract only the field my-value.
Now, whenever the my-value field of secret/foo changes in Vault, ESO will synchronize it into the Secret example-sync.
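To verify the sync, decode the generated Secret (names as defined above):
kubectl get secret example-sync -o jsonpath='{.data.foobar}' | base64 -d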
02.02.2025
How to create volumes and deploy PostgreSQL and pgAdmin
mkdir -p /disk1/postgres_data
mkdir -p /disk2/postgres_backup
chmod -R 777 /disk1/postgres_data
chmod -R 777 /disk2/postgres_backup
Docker Swarm deployment manifest
version: "3.8"

services:
  postgres:
    image: postgres:15
    deploy:
      placement:
        constraints:
          - "node.hostname == swarmnode1"
    volumes:
      - /disk1/postgres_data:/var/lib/postgresql/data
      - /disk2/postgres_backup:/backup
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydatabase
    networks:
      - my_network
    ports:
      - "5432:5432"

  pgadmin:
    image: dpage/pgadmin4
    deploy:
      placement:
        constraints:
          - "node.hostname == swarmnode1"
    volumes:
      - pgadmin_data:/var/lib/pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: adminpassword
    ports:
      - "7676:80"
    networks:
      - my_network

networks:
  my_network:
    driver: overlay

volumes:
  pgadmin_data:
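The stack can then be deployed from a manager node (the stack name is arbitrary):
docker stack deploy -c docker-compose.yml postgres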
29.04.2024
How to display server-side field validation errors with Bootstrap and Thymeleaf
<form action="#" th:action="@{/}" th:object="${myForm}" method="post"> <!-- 'myForm' is the form object -->
  <!-- start row 1 -->
  <div class="form row">
    <!-- col 1 -->
    <div class="col-lg-6">
      <div class="form-group">
        <label th:for="*{firstname}">First name</label>
        <!-- th:classappend adds the 'is-invalid' class in case there is a field error;
             for aria-describedby, see server-side validation in the Bootstrap docs -->
        <input th:field="*{firstname}" type="text" class="form-control"
               th:classappend="${#fields.hasErrors('firstname')}? is-invalid"
               placeholder="first name"
               aria-describedby="firstnameFeedback">
        <!-- is initially not displayed -->
        <div id="firstnameFeedback" th:if="${#fields.hasErrors('firstname')}"
             th:errors="*{firstname}" class="invalid-feedback">
        </div>
      </div>
(...)
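For completeness, the errors shown above are produced by standard bean validation on the server. A minimal sketch of the controller method (MyForm, the view name, and the mappings are assumptions):
@PostMapping("/")
public String submit(@Valid @ModelAttribute("myForm") MyForm myForm, BindingResult bindingResult) {
    if (bindingResult.hasErrors()) {
        // re-render the form; #fields.hasErrors('firstname') will now be true
        return "myform";
    }
    return "redirect:/success";
}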
24.02.2024
This task was astonishingly hard to configure. In my K3S cluster I have a Traefik reverse proxy deployed. What I wanted to achieve was: expose an application via Ingress, secured with a Let's Encrypt wildcard certificate and basic auth.
Step 1 involved opening the HTTP and HTTPS ports to my cluster's master node IP address.
Traefik itself was quite easily deployed through its Helm chart. This is my values.yaml. As Ionos is my domain hoster, I'm using their DNS challenge provider to generate Let's Encrypt certificates:
additionalArguments:
  - --entrypoints.websecure.http.tls.certresolver=ionos
  - --entrypoints.websecure.http.tls.domains[0].main=mydomain.de
  - --entrypoints.websecure.http.tls.domains[0].sans=*.mydomain.de
  - --certificatesresolvers.ionos.acme.dnschallenge.provider=ionos
  - --certificatesresolvers.ionos.acme.email=mailaddress@email.com
  - --certificatesresolvers.ionos.acme.dnschallenge.resolvers=1.1.1.1
  - --certificatesresolvers.ionos.acme.storage=/data/acme.json

deployment:
  initContainers: # This is necessary, or else Traefik is unable to create the acme.json file
    - name: volume-permissions
      image: traefik:v2.10.4
      command:
        [
          "sh",
          "-c",
          "touch /data/acme.json; chown 65532 /data/acme.json; chmod -v 600 /data/acme.json",
        ]
      securityContext:
        runAsNonRoot: false
        runAsGroup: 0
        runAsUser: 0
      volumeMounts:
        - name: data
          mountPath: /data

env:
  - name: IONOS_API_KEY # Store the API key in a secret. The format is public.private
    valueFrom:
      secretKeyRef:
        key: IONOS_API_KEY
        name: ionos-api-credentials

ingressRoute:
  dashboard:
    enabled: true

persistence:
  enabled: true
  path: /data
  size: 128Mi
Now I needed to create a Traefik Middleware to request basic auth:
apiVersion: v1
kind: Secret
metadata:
  name: basicauthcredentials
  namespace: default
data:
  # created with `htpasswd -nb user password | openssl base64`
  users: dXNlcjokYXByMSQ1Y1FtYldnWiRWcXBjVTZRSTBRdnZrVlJJbGFlN0UvCgo=
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: basicauthmiddleware
spec:
  basicAuth:
    secret: basicauthcredentials
After this, I am able to deploy an Ingress which exposes a service, provides it with a (wildcard) TLS certificate, and uses the basic auth middleware...voilà:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: myapplicationingress
  namespace: default
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: default-basicauthmiddleware@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: 'true'
spec:
  ingressClassName: traefik
  rules:
    - host: myapplication.mydomain.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapplication-svc
                port:
                  number: 8080
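A quick smoke test for certificate and middleware (host and credentials as used above):
# without credentials: expect HTTP 401
curl -i https://myapplication.mydomain.de
# with credentials: expect HTTP 200
curl -i -u user:password https://myapplication.mydomain.de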
05.09.2023
A stupid little mistake of mine that cost me some time.
My application context would not start due to
java.lang.IllegalArgumentException: Not an managed type: class java.lang.Object
My interface looked like this:
public interface MyUserRepository<MyUser, Long> extends JpaRepository<MyUser, Long>
The correct definition omits the generic type parameters on the interface itself. Declaring MyUserRepository<MyUser, Long> introduces new type variables named MyUser and Long that shadow the real classes, so Spring Data resolves the domain type to Object:
public interface MyUserRepository extends JpaRepository<MyUser, Long>
05.02.2022
Cheat-Sheet for restoring deleted files from Git repo
#Get a list of deleted files
git log --diff-filter=D --summary | grep delete
#Show commits where the deleted file was involved, copy latest commit hash to clipboard
git log --all -- <PATH_TO_DELETED_FILE>
#Restore it into the working copy with: (^ means the commit BEFORE the commit where file was deleted)
git checkout <COMMIT-HASH>^ -- <PATH_TO_DELETED_FILE>
25.01.2022
Cheat-Sheet to enable and use Minikube internal Docker registry
On Docker host machine, create or edit /etc/docker/daemon.json:
{
"insecure-registries" : ["192.168.49.2:5000"]
}
Save and restart Docker.
Delete an existing Minikube cluster:
minikube stop && minikube delete
Start minikube with insecure registry access enabled:
minikube start --insecure-registry "10.0.0.0/24"
Enable the registry addon:
minikube addons enable registry
Tag an existing image and push it to minikube registry:
docker tag 9999999.dkr.ecr.eu-central-1.amazonaws.com/my-registry/blah:latest $(minikube ip):5000/blah:latest
docker push $(minikube ip):5000/blah:latest
Deploy a pod with kubectl or install a Helm chart, referencing that image as
localhost:5000/blah:latest
Minikube should pull that image and start the pod.
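For example, with a hypothetical pod name:
kubectl run blah --image=localhost:5000/blah:latest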
09.06.2021
My task is to test a dockerized web application using Selenium. The tests must be defined with Gherkin and, of course, run headless on Jenkins. Here is what I did to achieve this.
This is a snippet from the class responsible for pulling and instantiating the container that holds the build of my webapp under test:
@Slf4j
public class WebappContainer {

    private static final WebappContainer instance = new WebappContainer();
    private static final String NETWORK_ALIAS = "WEBAPP";
    private static final int EXPOSED_PORT = 7654;
    // startup timeout in seconds; the value is an assumption, tune it to your app
    private static final int STARTUP_TIMEOUT = 60;

    private static final DockerImageName dockerImageName = DockerImageName
            .parse("myecr.amazonaws.com/webapp/my-little-webapp:" + getWebappImageVersion());

    public static final GenericContainer<?> container = new GenericContainer<>(dockerImageName)
            .withNetwork(NetworkUtils.getNetwork())
            .withNetworkAliases(NETWORK_ALIAS)
            .waitingFor(Wait.forHttp("/").forStatusCode(200).forPort(EXPOSED_PORT)
                    .withStartupTimeout(Duration.ofSeconds(STARTUP_TIMEOUT)))
            .withExposedPorts(EXPOSED_PORT)
            .withLogConsumer(new Slf4jLogConsumer(log))
            // next line may be specific to my setup: mount the Spring Boot test config into the container
            .withClasspathResourceMapping("application-test.yml", "/etc/config/application.yml",
                    BindMode.READ_ONLY);

    private WebappContainer() {
    }

    public static WebappContainer getInstance() {
        return instance;
    }

    public void start() {
        container.start();
    }

    public void stop() {
        container.stop();
    }

    public boolean isRunning() {
        return container.isRunning();
    }

    public int getHttpPort() {
        return container.getMappedPort(EXPOSED_PORT);
    }
}
For completeness' sake, here is the static helper method that determines the image version:
public static String getWebappImageVersion() {
    // this may be set by Jenkins to a specific image tag
    String imageVersion = System.getenv("WEBAPP_IMAGE_VERSION");
    return StringUtils.isNotBlank(imageVersion) ? imageVersion : "latest";
}
The class which defines the headless Chrome container:
public class ChromeWebDriverContainer {

    private static final ChromeWebDriverContainer instance = new ChromeWebDriverContainer();

    public static final BrowserWebDriverContainer<?> chrome = new BrowserWebDriverContainer<>()
            .withCapabilities(chromeOptions());

    private ChromeWebDriverContainer() {
    }

    // had to set these options or else the strangest errors appeared
    // while starting the container
    private static Capabilities chromeOptions() {
        ChromeOptions chromeOptions = new ChromeOptions();
        chromeOptions.addArguments("--headless", "--no-sandbox", "--disable-dev-shm-usage");
        return chromeOptions;
    }

    public static ChromeWebDriverContainer getInstance() {
        return instance;
    }

    public void start() {
        chrome.start();
    }

    public void stop() {
        chrome.stop();
    }

    public boolean isRunning() {
        return chrome.isRunning();
    }

    public int getHttpPort() {
        return chrome.getMappedPort(4444);
    }

    public RemoteWebDriver getRemoteWebDriver() {
        return chrome.getWebDriver();
    }
}
An example test implementation which uses the containers defined above:
@Cucumber
public class SimpleTest {

    private RemoteWebDriver driver;

    @Given("^Chrome is running$")
    public void chrome_is_running() {
        ChromeWebDriverContainer.getInstance().start();
        this.driver = ChromeWebDriverContainer.getInstance().getRemoteWebDriver();
    }

    @Given("^Webapp is running$")
    public void webapp_is_running() {
        WebappContainer.getInstance().start();
    }

    @When("^I visit the webapp start page$")
    public void visit_the_webapp_start_page() {
        // this line is quite important: the Chrome container needs access to the webapp container,
        // which exposes a port on the Docker host
        // see the NetworkUtils snippet below
        driver.get("http://" + NetworkUtils.determineLocalIpAddress() + ":"
                + WebappContainer.getInstance().getHttpPort());
    }

    @When("^I click 'Send'$")
    public void click_send() {
        driver.findElement(By.cssSelector("some css button selector")).click();
    }

    @Then("a message with title {word} should appear")
    public void message_should_appear(String title) {
        // it may take some time for the modal dialogue to appear, so wait for it
        WebDriverWait wait = new WebDriverWait(this.driver, 30);
        wait.until(ExpectedConditions.textToBe(By.cssSelector(".modal-title"), title));
    }
}
How to determine the Docker host's IP address:
@SneakyThrows
public static String determineLocalIpAddress() {
    try (final DatagramSocket socket = new DatagramSocket()) {
        // it doesn't matter that this external IP may not be reachable...but at least
        // the socket will be opened through the default gateway, and now we
        // can be quite sure that this is the right network interface
        socket.connect(InetAddress.getByName("8.8.8.8"), 5000);
        return socket.getLocalAddress().getHostAddress();
    } catch (UnknownHostException | SocketException e) {
        try {
            // fallback
            return InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e1) {
            log.error("unable to determine local ip address", e);
        }
    }
    return null;
}
And finally, stitching it all together with this feature definition:
Feature: My simple webapp feature

  Scenario: I want to send some data
    Given Webapp is running
    Given Chrome is running
    When I visit the webapp start page
    And I click 'Send'
    Then a message with title Success should appear
04.03.2021
How to unit test a Spring Boot @ConfigurationProperties class. The class to test:
@Configuration
@ConfigurationProperties(prefix = "scheduler")
@Data
public class SchedulerConfig {

    private String rate;
    private final Activetime activetime = new Activetime();

    @Data
    public static class Activetime {
        private int starts;
        private int ends;
    }
}
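For reference, a matching application.yml could look like this (the values mirror the test below):
scheduler:
  rate: "42"
  activetime:
    starts: 6
    ends: 22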
The unit test, using Spring Boot's ApplicationContextRunner:
class SchedulerConfigTest {

    private final ApplicationContextRunner contextRunner = new ApplicationContextRunner();

    @EnableConfigurationProperties(SchedulerConfig.class)
    static class DummyConfigurationProps {
    }

    @Test
    void shouldConfigureScheduler() {
        this.contextRunner
                .withUserConfiguration(DummyConfigurationProps.class)
                .withPropertyValues("scheduler.rate=42", "scheduler.activetime.starts=6", "scheduler.activetime.ends=22")
                .run(context -> {
                    SchedulerConfig schedulerConfig = context.getBean(SchedulerConfig.class);
                    assertThat(schedulerConfig).isNotNull();
                    assertThat(schedulerConfig.getRate()).isEqualTo("42");
                    assertThat(schedulerConfig.getActivetime().getStarts()).isEqualTo(6);
                    assertThat(schedulerConfig.getActivetime().getEnds()).isEqualTo(22);
                });
    }
}
04.03.2021
What I wanted to achieve: a small Kubernetes cluster built from Raspberry Pis.
For my setup I use one Raspi 4 and three Raspi 3Bs.
The Raspi 4 serves as the master node, the others as worker nodes.
First I flashed HypriotOS onto all four Raspis. Before commencing, don't forget to:
sudo apt-get update && sudo apt-get upgrade
I recommend installing flannel on all Raspis:
sudo apt-get install flannel
Edit /boot/cmdline.txt, append cgroup_enable=memory, and reboot.
I use K3S, which is a somewhat 'lightweight' Kubernetes distribution suitable for Raspis.
Installation has been made quite easy with k3sup.
Follow the instructions there; it should be no hassle.
The token needed to add the other Raspis as nodes to the cluster can be found on the master under
/var/lib/rancher/k3s/server/node-token. Copy it and execute on each of the other nodes:
curl -sfL https://get.k3s.io | K3S_URL=https://master:6443 K3S_TOKEN=mastertoken sh -
With sudo journalctl -f running on the node, the join process with the master should be visible.
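Afterwards, verify on the master that all nodes have joined:
sudo k3s kubectl get nodes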
07.02.2021
What I wanted to achieve was:
1. Pull a Spring Boot demo-app image from Dockerhub.
2. Install that image to Minishift by using Helm.
Step 1: Pull image from Dockerhub
docker pull kharulp12/spring_hello_rest_api
Step 2: Export Minishift's registry URL to a variable for easy re-use
export registry=$(minishift openshift registry)
Step 3: Tag the image for Minishift's internal image registry. Note the tag's version; it is referenced in the Helm chart later!
docker tag a54f676e $registry/myproject/springbootapp:1.16.0
Step 4: Push that image to Minishift's registry
docker push $registry/myproject/springbootapp:1.16.0
Step 5: Initialize a Helm chart
helm create springbootapp
Step 6: Edit values.yaml. The tag is left empty, so the chart's appVersion is used as the image tag (helm create defaults it to 1.16.0, matching the tag from step 3):
image:
  repository: 172.30.1.1:5000/myproject/springbootapp
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
Step 7: Install the chart
helm install springbootapp .
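Step 8: Verify the deployment
oc get pods
helm list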
02.09.2020
I fiddled with persistent volume claims (PVCs) on OpenShift.
Creating a PVC was no problem, but when I tried to delete it afterwards, it got stuck in the 'Terminating' state.
Here's what I did to remove it:
# Login to OpenShift, this can be obtained in web console with 'Copy Login Command'
$ oc login --token=41cxWS0NnARW2zxRCK5p2GQb31VNf7zEz-wuYMdhw1k --server=https://openshift.cluster.host:6443
# Create a pvc
$ oc set volume dc/testpvc --add --type pvc --claim-size=100Mi
info: Generated volume name: volume-s9njq
deploymentconfig.apps.openshift.io/testpvc volume updated
# Check the status
$ oc get pvc -w
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-gpvft Bound pvc-86cc776c-4190-4b76-bc27-5a8846c71fd8 1Gi RWO gp2 15s
# Try to delete it...
$ oc delete pvc/pvc-gpvft
persistentvolumeclaim "pvc-gpvft" deleted
# Check status...it's stuck in 'Terminating'
$ oc get pvc -w
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-gpvft Terminating pvc-86cc776c-4190-4b76-bc27-5a8846c71fd8 1Gi RWO gp2 8m29s
# Inspect the PVC as YAML...the finalizer is the interesting part
$ oc get pvc pvc-gpvft -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  (...)
  finalizers:
    - kubernetes.io/pvc-protection
  (...)
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: gp2
  volumeMode: Filesystem
  volumeName: pvc-86cc776c-4190-4b76-bc27-5a8846c71fd8
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
# Patch the finalizer
$ oc patch pvc pvc-gpvft -p '{"metadata":{"finalizers": []}}' --type=merge
persistentvolumeclaim/pvc-gpvft patched
# Check again...aaaand it's gone
$ oc get pvc
No resources found in test-space namespace.
18.08.2020
<!-- Retrofit pom with JKube-Plugin -->
<build>
  ...
  <plugins>
    <plugin>
      <groupId>org.eclipse.jkube</groupId>
      <artifactId>openshift-maven-plugin</artifactId>
      <version>${jkube.openshift.version}</version>
    </plugin>
    ...
  </plugins>
</build>
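The version property referenced above must be defined in the pom; for example (the version number is a placeholder, use a current JKube release):
<properties>
  <jkube.openshift.version>1.0.2</jkube.openshift.version>
</properties>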
# Login to OpenShift, this can be obtained in web console with 'Copy Login Command'
$ oc login --token=41cxWS0NnARW2zxRCK5p2GQb31VNf7zEz-wuYMdhw1k --server=https://openshift.cluster.host:6443
# Build and deploy to OpenShift
$ mvn oc:build oc:resource oc:apply
# Watch the deployment
$ oc get pods -w
# Find out route
$ oc get routes
# Undeploy everything
$ mvn oc:undeploy
# As an alternative, remove everything related to this deployment
$ oc delete all --selector app=myapplabel
13.08.2020
How to pause and resume Kafka message consumption on a schedule with Spring Kafka:
@Component
@Slf4j
public class ScheduledConsumer {

    private final BusinessService businessService;
    private final KafkaListenerEndpointRegistry registry;

    @Autowired
    public ScheduledConsumer(final BusinessService businessService, final KafkaListenerEndpointRegistry registry) {
        this.businessService = businessService;
        this.registry = registry;
    }

    // resume all Kafka listener containers at the configured time
    @Scheduled(cron = "${cron-expression.start}")
    public void resumeConsuming() {
        this.registry.getListenerContainers().forEach(MessageListenerContainer::resume);
        log.info("Resume consuming business objects...");
    }

    // pause all Kafka listener containers at the configured time
    @Scheduled(cron = "${cron-expression.stop}")
    public void pauseConsuming() {
        this.registry.getListenerContainers().forEach(MessageListenerContainer::pause);
        log.info("Pause consuming business objects...");
    }

    @KafkaListener(id = "mycontainer", topics = "${topic}", autoStartup = "${consumer.autostart}")
    public void consume(final ConsumerRecord<String, BusinessObject> businessRecord,
            final Acknowledgment acknowledgment) {
        log.info("Processing business object...");
        this.businessService.process(businessRecord.value());
        acknowledgment.acknowledge();
    }
}
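The corresponding configuration could look like this (the values are just examples: consume only between 6:00 and 22:00, and don't start the listener automatically):
cron-expression:
  start: "0 0 6 * * *"
  stop: "0 0 22 * * *"
topic: my-business-topic
consumer:
  autostart: "false"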
19.04.2020
# create branch from develop
git pull
git checkout -b feature/my-feature
# work on branch and commit work. repeat until work is done
git commit -am "implement my feature"
git push
# squash commits with an interactive rebase
# first, choose all commits to squash into one
git log --oneline
# second, squash the last x commits
# pick the one to squash into (usually the topmost), squash
# the others
git rebase -i HEAD~x
# force-push these changes to the remote branch
git push --force
# refresh work done by others on develop in the meantime...
git checkout develop
git pull
# change back to feature branch and rebase
git checkout feature/my-feature
git rebase develop
# optional: resolve conflicts and then
git add .
git rebase --continue
# since we rewrote history, force push changes to branch
git push --force-with-lease
# after that, branch can be merged into develop
09.04.2020
I run a dockerized Spring Boot application on a Raspberry Pi Zero (yes...that's possible).
It records the current temperatures to a MariaDB (which is running on another RPi).
At some point I noticed that the timestamps in the database were off by exactly minus two hours. The Docker host had the correct time zone (CEST, Europe/Berlin), but the running Docker container used UTC:
$ docker exec 9106cb56b3f6 date
Thu Apr 4 20:00:00 UTC 2020
On the Docker host it was already 22:00, but in a different time zone:
$ date
Thu Apr 4 22:00:00 CEST 2020
This is where the time difference came from. The solution (for me) was to set the TZ environment variable to the correct time zone, Europe/Berlin, when building the Docker image with the Maven Jib plugin:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>1.7.0</version>
  <configuration>
    (...stuff omitted for brevity...)
    <container>
      <environment>
        <TZ>Europe/Berlin</TZ>
      </environment>
    </container>
  </configuration>
</plugin>
03.04.2020
How to propagate a logout to Keycloak's end-session endpoint from a Spring Boot application:
import org.springframework.http.ResponseEntity;
import org.springframework.security.core.Authentication;
import org.springframework.security.oauth2.core.oidc.user.OidcUser;
import org.springframework.security.web.authentication.logout.LogoutHandler;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.util.UriComponentsBuilder;

import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@Component
public class KeycloakLogoutHandler implements LogoutHandler {

    private final RestTemplate restTemplate = new RestTemplate();

    @Override
    public void logout(HttpServletRequest request, HttpServletResponse response, Authentication authentication) {
        OidcUser user = (OidcUser) authentication.getPrincipal();
        // Keycloak's OIDC end-session endpoint lives under the issuer URL
        String endSessionEndpoint = user.getIssuer() + "/protocol/openid-connect/logout";
        UriComponentsBuilder builder = UriComponentsBuilder
                .fromUriString(endSessionEndpoint)
                .queryParam("id_token_hint", user.getIdToken().getTokenValue());
        ResponseEntity<String> logoutResponse = restTemplate.getForEntity(builder.toUriString(), String.class);
        if (!logoutResponse.getStatusCode().is2xxSuccessful()) {
            log.error("Unable to logout user");
        }
    }
}
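The handler still needs to be registered with Spring Security. A minimal sketch, assuming the newer SecurityFilterChain-style configuration:
@Bean
public SecurityFilterChain filterChain(HttpSecurity http, KeycloakLogoutHandler keycloakLogoutHandler) throws Exception {
    http.logout(logout -> logout
            .addLogoutHandler(keycloakLogoutHandler)
            .logoutSuccessUrl("/"));
    return http.build();
}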
02.04.2020
With the webjars-locator dependency on the classpath, the version number can be omitted from the webjar path:
<dependency>
  <groupId>org.webjars</groupId>
  <artifactId>webjars-locator</artifactId>
</dependency>
<script
  th:src="@{/webjars/jquery/jquery.min.js}"
  type="text/javascript"
></script>