Hi Hernán,
Thanks for your feedback.

2015-08-24 8:13 GMT+02:00 Hernán Morales Durand <hernan.mora...@gmail.com>:

> Nicolai
>
> 2015-08-23 8:44 GMT-03:00 Nicolai Hess <nicolaih...@web.de>:
>
>> For those who had problems with Pharo on Windows and GitHub-based
>> repositories,
>> I built a Windows VM with support for long paths:
>>
>>
>> https://drive.google.com/file/d/0B8yEahnuIem2bmxwdzJuUXFxVGM/view?usp=sharing
>>
>>
>> For browsing directories with large paths (FileList or Inspect),
>> you may need one additional change in the image (but I am not really sure
>> about that):
>>
>> DiskStore>>initialize
>>     super initialize.
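>>     "use the VM-reported maximum, falling back to 32767 (the Windows
>>      extended-length path limit) when the VM does not report one"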
>>     maxFileNameLength := Smalltalk vm maxFilenameLength ifNil: [ 32767 ].
>>
>>
>> please test and give feedback.
>>
>>
> Using Win 8.1, I confirm the FileList tree now reads directories beyond the
> 260-character limit, without needing the DiskStore>>#initialize change.
>
> However, there seems to be another issue with GitFileTree decompression:
> loading Aconcagua/Chalten, etc. with your new VM from a directory whose name
> has more characters still signals "File not found" :(
>

I tried Aconcagua from the Catalog Browser in Pharo 5.0 and it worked for me.
Can you show the base path, that is, the working directory that contains your
Pharo package-cache/github directory? Is there something special in your
path name or your disk?
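
For example, printing each of these in a playground should show the base path
and the limit the VM reports (assuming a recent Pharo 5.0 image):

    FileSystem workingDirectory fullName.
    Smalltalk vm maxFilenameLength.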


>
> Is the VM using the "Unicode-aware API"? According to these guys,
> http://serverfault.com/questions/163419/window-256-characters-path-name-limitation
> such a Windows Unicode API would let the VM use paths of up to 32767 characters.
>

I am not sure; all the file/directory operations we use should go through the
Unicode (wide-character) versions (CreateFileW / FindFirstFileW / ...).
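
A rough way to check from the image side is to build a path well beyond 260
characters and write a file there, e.g. (just a sketch; each directory name is
kept under 255 characters because NTFS also limits single path segments):

    | dir |
    dir := FileSystem workingDirectory.
    3 timesRepeat: [
        dir := dir / (String new: 100 withAll: $a).
        dir ensureCreateDirectory ].
    (dir / 'test.txt') writeStreamDo: [ :stream | stream nextPutAll: 'hello' ].
    (dir / 'test.txt') exists.

If that works but GitFileTree still fails, the problem is probably on the
GitFileTree/archive side rather than in the VM file primitives.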


>
>
> This wasn't as easy as I thought, and I had to make some more changes
>> for the file permissions (the stat-functions don't work for files with
>> long paths).
>> Please test other file/folder operations.
>>
>>
> Thank you, Nicolai, for your effort; I know it's hard to dive into the depths
> of C.
>
> Hernán
>
>
>> nicolai
>>
>>
>>
>>
>
