Q: What are the best practices for accessing Ceph over a flaky network connection? For example, can I set up a local dm-cache binding Ceph with a local SSD to buffer the I/O? Thanks.

A: A flaky network will usually be quite problematic. There is no guarantee that data has not been modified on another system once the network comes back after a temporary interruption. If the data was changed remotely and something also wrote to the local dm-cache device, a split-brain can happen, and someone then needs to decide which side of the data is the right one. Maybe it is possible to tune RBD to work with dm-cache and network interruptions. Or, for CephFS, you may want to look into FS-Cache (I don't know whether CephFS supports that, though).

Note: the discussions in this GitHub project are for Ceph-CSI, which is _only_ a driver to provision/mount Ceph-based storage. The best venue to ask about guidance and experience from others is at.

Q: After upgrading our cluster from Nautilus -> Pacific -> Quincy, we noticed we can't copy bigger objects anymore via S3: `Aws::S3::Errors::EntityTooLarge (Aws::S3::Errors::EntityTooLarge)`. After some tests we have the following findings:

* The issue starts after upgrading to Quincy (17.2.6).
* Problems start for objects bigger than 5 GB (the multipart limit).
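The 5 GB figure matches the S3 limit on a single server-side copy operation: larger objects must be copied as a multipart upload, with each part copied via `UploadPartCopy` and a byte range. A minimal sketch of that workaround, assuming a boto3 S3 client (the function names `part_ranges` and `multipart_copy`, and all bucket/key names, are hypothetical, not from the thread):

```python
def part_ranges(total_size, part_size=5 * 1024**3):
    """Split total_size bytes into (part_number, 'bytes=start-end') ranges.

    Each range is at most part_size bytes (5 GiB here, the S3 maximum
    part size); ranges are inclusive, as CopySourceRange requires.
    """
    ranges = []
    start = 0
    part_number = 1
    while start < total_size:
        end = min(start + part_size, total_size) - 1
        ranges.append((part_number, f"bytes={start}-{end}"))
        start = end + 1
        part_number += 1
    return ranges


def multipart_copy(client, src_bucket, src_key, dst_bucket, dst_key, size):
    """Server-side copy of an object larger than 5 GB via UploadPartCopy.

    `client` is assumed to be a boto3 S3 client pointed at the RGW endpoint.
    """
    upload = client.create_multipart_upload(Bucket=dst_bucket, Key=dst_key)
    parts = []
    for number, byte_range in part_ranges(size):
        result = client.upload_part_copy(
            Bucket=dst_bucket,
            Key=dst_key,
            UploadId=upload["UploadId"],
            PartNumber=number,
            CopySource={"Bucket": src_bucket, "Key": src_key},
            CopySourceRange=byte_range,
        )
        parts.append(
            {"PartNumber": number, "ETag": result["CopyPartResult"]["ETag"]}
        )
    client.complete_multipart_upload(
        Bucket=dst_bucket,
        Key=dst_key,
        UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )
```

Whether the Ruby SDK in the thread can be told to do the same depends on its copy helper; the underlying S3 API calls are the same either way. This does not explain why the behavior changed between Pacific and Quincy, only how to copy large objects within the documented S3 limits.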