To preface: I'm using kohya_ss's Dreambooth LoRA method to train.
To get better results for objects:
- Use as small a batch size as possible, even if your VRAM would allow a larger one (I used 1)
- A low learning rate (something like 2e-6, i.e. 0.000002) plus more epochs gives you more fine-grained intermediate checkpoints to pick from
- Denoise the training images before using them
- Use a generic checkpoint for training (base SD 1.5 or AnyLoRA work pretty well IMO)
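
For reference, the settings above roughly translate into a kohya sd-scripts invocation like the one below. This is a sketch, not my exact command: the paths are placeholders, and flags like `--network_dim` and `--save_every_n_epochs` are my assumptions about reasonable defaults, so double-check against your kohya_ss version before running.

```shell
# Hypothetical example; adjust paths and values to your setup.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/sd15-or-anylora.safetensors" \
  --train_data_dir="/path/to/denoised_training_images" \
  --output_dir="/path/to/output" \
  --network_module=networks.lora \
  --train_batch_size=1 \
  --learning_rate=2e-6 \
  --max_train_epochs=20 \
  --save_every_n_epochs=1 \
  --resolution=512,512 \
  --mixed_precision=fp16
```

Saving a checkpoint every epoch is what makes the low-LR/many-epochs approach useful: you can test each intermediate LoRA and keep the one that captures the object best without overbaking.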