The field is marked as "the offset in the file (in clusters)", but it
was being used like this
`cluster_size*(nums)+mapping->info.file.offset`, which is incorrect.
Additionally, removed the `abort` when `first_mapping_index` does not
match, as this is the case when adding new clusters for files…
Before this commit, when calling `commit_one_file` with, for example,
`offset=0x2000` (the second cluster), we would not fetch the next
cluster from the FAT, and would instead use the first cluster for the
read operation.
This is due to an off-by-one error here, where `i=0x2000`…
Added several tests to verify the implementation of the vvfat driver.
We needed a way to interact with it, so we created a basic `fat16.py` driver that
handled writing correct sectors for us.
Added `vvfat` to the non-generic formats, as it's not a normal image format.
Signed-off-by: Amjad Alsharafi
These patches fix some bugs found when modifying files in vvfat.
First, there was a bug when writing to cluster 2 or above of a file: it
would copy the preceding cluster instead, so when writing to cluster=2,
the content of cluster=1 would be written to disk in its place.
Another issue
When reading with `read_cluster` we get the `mapping` with
`find_mapping_for_cluster` and then we call `open_file` for this
mapping.
The issue appears when it's the same file, but a second cluster that is
not immediately after the first: imagine clusters `500 -> 503`; this
will give us two mappings, one has the…
On Fri, May 31, 2024 at 07:22:49PM +0200, Kevin Wolf wrote:
> Am 26.05.2024 um 11:56 hat Amjad Alsharafi geschrieben:
> > These patches fix some bugs found when modifying files in vvfat.
> > First, there was a bug when writing to the cluster 2 or above of a file, it
> > will copy the cluster before…
* Michael Galaxy (mgal...@akamai.com) wrote:
> One thing to keep in mind here (despite me not having any hardware to test)
> was that one of the original goals here
> in the RDMA implementation was not simply raw throughput nor raw latency,
> but a lack of CPU utilization in kernel
> space due to t…
PF initializes the SR-IOV VF BAR0 region in nvme_init_sriov() with a bar_size
calculated from Primary Controller Capabilities such as VQFRSM and VIFRSM
rather than `max_ioqpairs` and `msix_qsize`, which are for the PF only.
In this case, the bar size reported in nvme_init_sriov() by the PF and
nvme_init_pci() by the VF m…
From: Li Feng
When vhost-user is reconnecting to the backend and fails at
get_features in vhost_dev_init(), the reconnect will fail and will
never be retriggered.
The reason is:
When vhost-user fails at get_features, vhost_dev_cleanup will be called…
From: Li Feng
This reverts commit f02a4b8e6431598612466f76aac64ab492849abf.
Since the current patch cannot completely fix the lost-reconnect
problem, there is a scenario that is not considered:
- When the virtio-blk driver is removed from the guest OS,
s->connected has no chance to be set to false…
From: Jonah Palmer
Add support for the VIRTIO_F_NOTIFICATION_DATA feature across a variety
of vhost devices.
The inclusion of VIRTIO_F_NOTIFICATION_DATA in the feature bits arrays
for these devices ensures that the backend is capable of offering and
providing support for this feature, and that it…
On 6/3/24 09:21, Eric Blake wrote:
The following changes since commit 3b2fe44bb7f605f179e5e7feb2c13c2eb3abbb80:
Merge tag 'pull-request-2024-05-29' of https://gitlab.com/thuth/qemu into
staging (2024-05-29 08:38:20 -0700)
are available in the Git repository at:
https://repo.or.cz/qemu/e
Before this commit, scsi-disk accepts a string of arbitrary length for
its "serial" property. However, the value visible on the guest is
actually truncated to 36 characters. This limitation doesn't come from
the SCSI specification; it is an arbitrary limit that was initially
picked as 20 and later…
Am 04.06.2024 um 09:58 hat Fiona Ebner geschrieben:
> Am 03.06.24 um 18:21 schrieb Kevin Wolf:
> > Am 03.06.2024 um 16:17 hat Fiona Ebner geschrieben:
> >> Am 26.03.24 um 13:44 schrieb Kevin Wolf:
> >>>
> >>> The fix for bdrv_flush_all() is probably to make it bdrv_co_flush_all()
> >>> with a coroutine…
On Wed, Feb 28, 2024 at 08:33:11PM +0900, Akihiko Odaki wrote:
> I submitted a RFC series[1] to add support for SR-IOV emulation to
> virtio-net-pci. During the development of the series, I fixed some
> trivial bugs and made improvements that I think are independently
> useful. This series extracts…
On Mon, Jun 03, 2024 at 11:42:35AM GMT, Markus Armbruster wrote:
Stefano Garzarella writes:
On Wed, May 29, 2024 at 04:50:20PM GMT, Markus Armbruster wrote:
Stefano Garzarella writes:
shm_open() creates and opens a new POSIX shared memory object.
A POSIX shared memory object allows creating…
ping 2
On 3/15/24 09:58, Alexander Ivanov wrote:
If a block device is an LVM logical volume we can resize it using
standard LVM tools.
Add a helper to detect if a device is a DM device. In raw_co_truncate()
check if the block device is DM and resize it by executing lvresize.
Signed-off-by: Alexander Ivanov
Signed-off-by: Fiona Ebner
---
An alternative would be to detect whether the argument list is 'void'
in FuncDecl's __init__, assign the empty list to self.args there and
special case based on that in the rest of the code.
Not super happy about the introduction of the 'void_value' parameter,
but…
Am 03.06.24 um 18:21 schrieb Kevin Wolf:
> Am 03.06.2024 um 16:17 hat Fiona Ebner geschrieben:
>> Am 26.03.24 um 13:44 schrieb Kevin Wolf:
>>>
>>> The fix for bdrv_flush_all() is probably to make it bdrv_co_flush_all()
>>> with a coroutine wrapper so that the graph lock is held for the whole
>>> function…