[issue38039] Segfault when pickling dictionary with large pandas dataframes

2022-01-23 Thread Irit Katriel
Change by Irit Katriel:

resolution: -> third party
stage: -> resolved
status: open -> closed

[issue38039] Segfault when pickling dictionary with large pandas dataframes

2021-09-20 Thread Irit Katriel
Irit Katriel added the comment: Also, if it is only happening with pandas, does it happen with any large pandas frame or only with specific ones? Are you able to provide a complete script that builds the frame, pickles the dictionary, and reproduces the crash?
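A hypothetical shape for such a script (not from the thread; the sizes, column layout, and output path are all assumptions) might be:

    # Hypothetical repro sketch: build a large synthetic frame and pickle
    # a dictionary containing it. All sizes here are assumptions.
    import pickle

    import numpy as np
    import pandas as pd

    # ~8 GB of float64 data; scale n_rows up toward the reported sizes.
    n_rows, n_cols = 10_000_000, 100
    timed_dfs = pd.DataFrame(np.random.rand(n_rows, n_cols))

    with open('repro.pkl', 'wb') as fout:
        pickle.dump({'timed_dfs': timed_dfs}, fout)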

[issue38039] Segfault when pickling dictionary with large pandas dataframes

2021-09-19 Thread Irit Katriel
Irit Katriel added the comment: Have you been able to reproduce this without pandas? If not, have you reported this problem in the pandas bug tracker?

nosy: +iritkatriel
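One way to answer the first question is to pickle non-pandas objects of comparable size; a minimal pandas-free sketch (sizes are assumptions):

    # Sketch: attempt the same dump with plain NumPy/bytes objects.
    # If this also segfaults, the bug is not pandas-specific.
    import pickle

    import numpy as np

    big = {
        'array': np.zeros(4 * 1024**3 // 8),  # ~4 GiB of float64 zeros
        'blob': b'\x00' * (2 * 1024**3),      # ~2 GiB bytes object
    }

    with open('nopandas.pkl', 'wb') as fout:
        pickle.dump(big, fout, protocol=pickle.HIGHEST_PROTOCOL)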

[issue38039] Segfault when pickling dictionary with large pandas dataframes

2019-09-06 Thread Ilya Valmianski
Ilya Valmianski added the comment: As a sizing clarification: timed_dfs ~150 GB, control_features ~30 GB, notime_dfs ~2 GB.
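Sizes like these can be verified with pandas' own accounting; a small helper sketch (the frame names are taken from the comment above, everything else is assumed):

    # Sketch: print the deep in-memory size of each frame in GiB.
    # deep=True also counts object-dtype payloads such as strings.
    def report_sizes(frames):
        for name, df in frames.items():
            gib = df.memory_usage(deep=True).sum() / 1024**3
            print(f'{name}: {gib:.1f} GiB')

    report_sizes({'timed_dfs': timed_dfs,
                  'control_features': control_features,
                  'notime_dfs': notime_dfs})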

[issue38039] Segfault when pickling dictionary with large pandas dataframes

2019-09-06 Thread Ilya Valmianski
Ilya Valmianski added the comment: Below is the code. It segfaults with either dill or pickle on 3.6 and 3.7.

    with open(output_path, 'wb') as fout:
        dill.dump({
            'timed_dfs': timed_dfs,  # large pandas dataframe with all but one columns being
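Since the dictionary mixes several large objects, one debugging step (not suggested in the thread itself) is to dump each value separately to isolate which object triggers the crash; payload below is a stand-in for the dict passed to dill.dump:

    # Sketch: pickle each dict entry on its own to find the offending one.
    import pickle

    for key, value in payload.items():  # payload: stand-in for the dict above
        with open(f'{key}.pkl', 'wb') as fout:
            # Protocol 4 added framing and support for objects over 4 GB.
            pickle.dump(value, fout, protocol=4)
        print(f'{key}: dumped ok')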

[issue38039] Segfault when pickling dictionary with large pandas dataframes

2019-09-05 Thread Eric V. Smith
Eric V. Smith added the comment: Can you provide the code that caused the segfault?

nosy: +eric.smith

[issue38039] Segfault when pickling dictionary with large pandas dataframes

2019-09-05 Thread Ilya Valmianski
New submission from Ilya Valmianski: Tried pickling a dictionary with multiple pandas tables and Python primitive types. The pandas tables are large, so the full object size is ~200 GB, but the system should not have been OOM (it crashed with ~300 GB of system memory available). Reproduced on two machines running RHEL
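For segfault reports like this one, the standard-library faulthandler module can show where in the Python code the crash occurred; a minimal sketch (big_dict is a stand-in for the reporter's dictionary):

    # Sketch: enable faulthandler so a segfault prints the Python-level
    # stack trace to stderr before the process dies.
    import faulthandler
    import pickle

    faulthandler.enable()

    with open('output.pkl', 'wb') as fout:
        pickle.dump(big_dict, fout)  # big_dict: stand-in for the ~200 GB dict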