Changes include transformers adapter improvements, new config options, and utils:
- support inference via transformers.pipeline
- init_params.always_clear_mem: bool, run gc.collect() & torch.cuda.empty_cache() before inference
- init_params.gradient_checkpointing: bool
- init_params.pipeline_task: str
- init_params.use_pipeline: bool
- the transformers inference function can now be configured by name (str)
- utils: ui.black() added
- bugfix: prevent config validation errors for ApiType.NONE
- added config validation tests; Qwen1.5-1.8B-Chat added to local transformers tests
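
A rough sketch of how the new `init_params` options from this release might be laid out in a configuration (a plain-dict illustration only; the key names are taken from the notes above, but the surrounding structure, the model name, and the exact way ai-microcore consumes these values are assumptions; check the project docs for the real configuration API):

```python
# Hypothetical configuration sketch; key names come from the release notes,
# everything else (structure, model, task value) is illustrative.
config = {
    "MODEL": "Qwen/Qwen1.5-1.8B-Chat",  # model used in the local transformers tests
    "INIT_PARAMS": {
        "use_pipeline": True,            # route inference through transformers.pipeline
        "pipeline_task": "text-generation",  # task string passed to the pipeline (assumed value)
        "always_clear_mem": True,        # gc.collect() & torch.cuda.empty_cache() before inference
        "gradient_checkpointing": False, # trade compute for memory during forward passes
    },
}

print(config["INIT_PARAMS"]["use_pipeline"])  # → True
```

`always_clear_mem` is likely most useful when running several models on one GPU, where stale cached allocations would otherwise cause out-of-memory errors.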
**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.3.1...v3.4.0