Enabling default encryption on a bucket does not encrypt the objects that already exist in it. AWS suggests recursively copying the files onto themselves with the AWS CLI:
aws s3 cp s3://awsexamplebucket/ s3://awsexamplebucket/ --recursive
which works perfectly if bucket versioning is not enabled: the copy replaces each old object and writes the new one with the bucket's default encryption (a specific KMS key can also be chosen with --sse aws:kms --sse-kms-key-id). With versioning enabled, however, the cp command does not replace the existing objects but creates a new version of each, so the old version stays unencrypted while only the new version is encrypted. One way to address this is a lifecycle rule that expires and deletes the old, noncurrent versions.
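Such a lifecycle rule can be sketched as a configuration for boto3's put_bucket_lifecycle_configuration call. This is only an illustration: the rule ID and the one-day expiration window below are my own assumptions, not values from AWS or this post.

```python
# Lifecycle configuration that permanently deletes noncurrent
# (i.e. old, unencrypted) object versions one day after they are
# superseded. Rule ID and NoncurrentDays are illustrative choices.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-noncurrent-versions",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix: apply to the whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
        }
    ]
}
```

The configuration would then be applied with client.put_bucket_lifecycle_configuration(Bucket=bucket, LifecycleConfiguration=lifecycle_config).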
Another approach is to remove all old versions of the objects and keep only the latest (encrypted) version, i.e. run the aws s3 cp command above and then delete the old versions. This can be done with the following Python script using boto3:
import boto3

profile_name = ""
boto3.setup_default_session(profile_name=profile_name)
client = boto3.client('s3')

bucket = "bucket_to_remove_old_versions"
# Initial arguments for the first page of results
args = {"Bucket": bucket, "MaxKeys": 1000}

while True:
    items = client.list_object_versions(**args)
    for version in items.get('Versions', []):
        # Select only the versions that are not the latest
        if not version['IsLatest']:
            # This is an old version that needs to be removed
            version_id = version['VersionId']
            key = version['Key']
            response = client.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
    if not items['IsTruncated']:
        # A non-truncated listing means we have reached the last keys in the bucket
        break
    # Add both continuation markers to fetch the next page of versions
    args = {
        "Bucket": bucket,
        "KeyMarker": items['NextKeyMarker'],
        "VersionIdMarker": items['NextVersionIdMarker'],
        "MaxKeys": 1000,
    }
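As a variation, boto3's built-in paginator for list_object_versions can replace the manual marker bookkeeping. The sketch below assumes the same bucket name; the two function names are my own and not part of any API.

```python
def noncurrent_versions(page):
    # Yield (key, version_id) for every noncurrent version found in one
    # list_object_versions response page.
    for version in page.get("Versions", []):
        if not version["IsLatest"]:
            yield version["Key"], version["VersionId"]

def delete_old_versions(bucket):
    import boto3  # imported here so the helper above has no dependencies
    client = boto3.client("s3")
    paginator = client.get_paginator("list_object_versions")
    # The paginator handles KeyMarker/VersionIdMarker continuation for us.
    for page in paginator.paginate(Bucket=bucket):
        for key, version_id in noncurrent_versions(page):
            client.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
```

Splitting the filtering into its own generator also makes it easy to dry-run the script: print the (key, version_id) pairs first, then call delete_object once the output looks right.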