Monday 4 October 2021

Bitbucket Pipelines for Java artifacts and uploading to AWS S3

I am new to Bitbucket and, for whatever reason, creating a pipeline that builds a Java artifact and publishes it to an S3 bucket has proved a lot harder than it should have.


There were a couple of gotchas:

  1. artifacts created in one step are not visible in the next unless you declare them in the step's artifacts section
  2. the aws-s3-deploy pipe insisted on uploading absolutely everything in the workspace, not just the war file

Anyway, I took the weekend off and then resolved my remaining issues in about five minutes. Isn't it always the way. So here is the bitbucket-pipelines.yml file I ended up with; the comments mark where each gotcha is dealt with.



# Template maven-build

# This template allows you to test and build your Java project with Maven.
# The workflow allows running tests, code checkstyle and security scans on the default branch.

# Prerequisites: pom.xml and appropriate project structure should exist in the repository.

image: maven:3.6.3

pipelines:
  default:
    - parallel:
        - step:
            name: Build and Test
            caches:
              - maven
            script:
              - mvn -B verify --file pom.xml package
            # declare the build output as an artifact (gotcha 1)
            artifacts:
              - $BITBUCKET_CLONE_DIR/target/v*
            after-script:
              # move just the war into its own directory so that it is the
              # only thing the aws-s3-deploy pipe uploads (gotcha 2)
              - mkdir artifacts_for_deploy
              - mv target/v*war artifacts_for_deploy
              - pipe: atlassian/aws-s3-deploy:1.1.0
                variables:
                  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                  AWS_DEFAULT_REGION: 'us-west-1'
                  S3_BUCKET: $AWS_IFS_S3_ARTIFACT_BUCKET
                  LOCAL_PATH: 'artifacts_for_deploy'
                  COMMAND: 'upload'
        - step:
            name: Checkstyle
            script:
              - pipe: atlassian/checkstyle-report:0.2.0
        - step:
            name: Security Scan
            script:
              # Run a security scan for sensitive data.
              # See more security tools at https://bitbucket.org/product/features/pipelines/integrations?&category=security
              - pipe: atlassian/git-secrets-scan:0.4.3
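
A note on the design: the upload happens in the Build and Test step's own after-script rather than in a separate deploy step, which sidesteps gotcha 1 entirely, since nothing has to cross a step boundary. Moving the war into artifacts_for_deploy first means the aws-s3-deploy pipe only sees the one file it should upload, which was the fix for gotcha 2. The artifacts declaration is still worth keeping, as it makes the built war downloadable from the pipeline page.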

Monday 12 April 2021

JDBI v3 and the BatchChunkSize annotation: only inserting up to the size of the BatchChunkSize

Well, hasn't it been a long time.


We recently upgraded to JDBI v3 and as part of that came across an interesting problem: with v3, a batch insert was only inserting up to the number of rows defined in the BatchChunkSize annotation. So if the annotation was set to 50 you would get a maximum of 50 rows inserted, no matter how many you passed in.


In v2 we had a method on an interface defined as:

@GetGeneratedKeys
@SqlBatch("Insert into my_table (......)")
@BatchChunkSize(50)
void insertAsBatch(@BindBean("r") List<Foo> foos);


NB: in v2 this returned void.


So, innocently and clearly incorrectly, I changed this to return an int, assuming that would be the number of rows inserted. Note that, stupidly, we had no test for this.


Users then found that we were only able to insert 50 rows.
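
For the record, the broken version looked like this, with only the return type changed:

@GetGeneratedKeys
@SqlBatch("Insert into my_table (......)")
@BatchChunkSize(50)
int insertAsBatch(@BindBean("r") List<Foo> foos); // wrong: only the first chunk of rows is inserted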


Writing a test, I then found that the value returned was the ID of the first row inserted. The fix was very simple: change the return type to an int[], i.e. this:

@GetGeneratedKeys
@SqlBatch("Insert into my_table (......)")
@BatchChunkSize(50)
int[] insertAsBatch(@BindBean("r") List<Foo> foos);
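
And this time there is a test. Here is a minimal sketch of the sort of regression test that would have caught this, assuming JUnit 5, an H2 in-memory database and JDBI's SqlObject plugin; FooDao and Foo are illustrative names rather than our real code, and schema setup is omitted:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.sqlobject.SqlObjectPlugin;
import org.junit.jupiter.api.Test;

class InsertAsBatchTest {

    // hypothetical in-memory test database; creating my_table is left out here
    private final Jdbi jdbi = Jdbi.create("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
            .installPlugin(new SqlObjectPlugin());

    // FooDao is a hypothetical SqlObject interface containing the
    // insertAsBatch method above
    private final FooDao dao = jdbi.onDemand(FooDao.class);

    @Test
    void insertsMoreRowsThanTheChunkSize() {
        // 120 rows forces the batch to span three chunks (50 + 50 + 20)
        List<Foo> foos = IntStream.range(0, 120)
                .mapToObj(i -> new Foo("name-" + i))
                .collect(Collectors.toList());

        int[] ids = dao.insertAsBatch(foos);

        // one generated key per row shows every chunk was executed,
        // not just the first
        assertEquals(120, ids.length);
    }
}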