On 08.10.2012 15:50, alexdeucher at gmail.com wrote:
> From: Alex Deucher <alexander.deucher at amd.com>
>
> The actual set up and assignment of VM page tables
> is done on the fly in radeon_gart.c.
>
> Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
One comment below, apart from that it's:

Reviewed-by: Christian König <christian.koenig at amd.com>

> ---
>   drivers/gpu/drm/radeon/ni.c            |    4 ++++
>   drivers/gpu/drm/radeon/radeon_device.c |    3 +++
>   drivers/gpu/drm/radeon/si.c            |    7 ++++---
>   3 files changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
> index 48e2337..cfb9276 100644
> --- a/drivers/gpu/drm/radeon/ni.c
> +++ b/drivers/gpu/drm/radeon/ni.c
> @@ -770,6 +770,10 @@ static int cayman_pcie_gart_enable(struct radeon_device *rdev)
>       WREG32(0x15DC, 0);
>   
>       /* empty context1-7 */
> +     /* Assign the pt base to something valid for now; the pts used for
> +      * the VMs are determined by the application and setup and assigned
> +      * on the fly in the vm part of radeon_gart.c
> +      */
>       for (i = 1; i < 8; i++) {
>               WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
>               WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), 0);
> diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
> index 64a4264..1fad47e 100644
> --- a/drivers/gpu/drm/radeon/radeon_device.c
> +++ b/drivers/gpu/drm/radeon/radeon_device.c
> @@ -1018,6 +1018,9 @@ int radeon_device_init(struct radeon_device *rdev,
>               return r;
>       /* initialize vm here */
>       mutex_init(&rdev->vm_manager.lock);
> +     /* FIXME start with 4G, once using 2 level pt switch to full
> +      * vm size space
> +      */
Even with a two-level page table I don't think we want to switch to the
full VM size space.

A 40-bit address space minus the 12-bit page offset leaves 28 bits for
the page directory/page table. Assuming a 50/50 split between them, that
gives 2^14 * 8 bytes = 128 KiB for each table, which in my eyes is a bit
much (and it gets even worse once we get a 48-bit address space...).
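
Just to spell out the numbers, here is a throw-away example (not driver
code, purely illustrative):

#include <stdio.h>

/* Size of one page directory / page table level, assuming 8-byte
 * entries and an even split of the bits left after the 12-bit page
 * offset.
 */
static unsigned long level_size(unsigned int va_bits)
{
        unsigned int level_bits = (va_bits - 12) / 2;   /* 50/50 split */

        return (1UL << level_bits) * 8;                 /* 8 bytes per entry */
}

int main(void)
{
        printf("40-bit VA: %lu KiB per level\n", level_size(40) >> 10); /* 128 */
        printf("48-bit VA: %lu KiB per level\n", level_size(48) >> 10); /* 2048 */
        return 0;
}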

I would rather make max_pfn depend on the available memory, so that a
single application can at least map all of VRAM + GART memory.
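
Something roughly like this in radeon_device_init() (just a sketch, I
haven't thought about rounding or upper limits; the use of
roundup_pow_of_two()/RADEON_GPU_PAGE_SHIFT here is only meant as an
illustration):

        /* Sketch: derive the per-VM address space from what a process
         * could realistically map (all of VRAM plus GART) instead of a
         * fixed 4G.
         */
        u64 size = rdev->mc.real_vram_size + rdev->mc.gtt_size;

        rdev->vm_manager.max_pfn = roundup_pow_of_two(size >> RADEON_GPU_PAGE_SHIFT);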

>       rdev->vm_manager.max_pfn = 1 << 20;
>       INIT_LIST_HEAD(&rdev->vm_manager.lru_vm);
>   
> diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
> index c76825f..f272ead 100644
> --- a/drivers/gpu/drm/radeon/si.c
> +++ b/drivers/gpu/drm/radeon/si.c
> @@ -2407,12 +2407,13 @@ static int si_pcie_gart_enable(struct radeon_device *rdev)
>       WREG32(0x15DC, 0);
>   
>       /* empty context1-15 */
> -     /* FIXME start with 4G, once using 2 level pt switch to full
> -      * vm size space
> -      */
>       /* set vm size, must be a multiple of 4 */
>       WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
>       WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
> +     /* Assign the pt base to something valid for now; the pts used for
> +      * the VMs are determined by the application and setup and assigned
> +      * on the fly in the vm part of radeon_gart.c
> +      */
>       for (i = 1; i < 16; i++) {
>               if (i < 8)
>                       WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
