lidavidm commented on pull request #12099:
URL: https://github.com/apache/arrow/pull/12099#issuecomment-1007667064
Surprisingly, it is the I/O thread pool that is blocked:
```
(gdb) info thread
  Id   Target Id                                          Frame
* 1    Thread 0x7f8856a04740 (LWP 27248) "python"          0x00007f88565f5ad3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x5622ee0e97e8) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
  ...snip...
  29   Thread 0x7f879e7fc700 (LWP 27276) "Io-7"            0x00007f88565f5ad3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7f87900051e0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
  30   Thread 0x7f879d3ff700 (LWP 27277) "jemalloc_bg_thd" 0x00007f88565f5ad3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7f87cd40a794) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
  31   Thread 0x7f878ffff700 (LWP 27278) "jemalloc_bg_thd" 0x00007f88565f5ad3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7f87cd40a864) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
(gdb) t 29
[Switching to thread 29 (Thread 0x7f879e7fc700 (LWP 27276))]
#0  0x00007f88565f5ad3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7f87900051e0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
88      in ../sysdeps/unix/sysv/linux/futex-internal.h
(gdb) bt
#0  0x00007f88565f5ad3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x7f87900051e0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x7f8790005190, cond=0x7f87900051b8) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=0x7f87900051b8, mutex=0x7f8790005190) at pthread_cond_wait.c:655
#3  0x00007f87cdd374ad in __gthread_cond_wait (__mutex=<error reading variable: dwarf2_find_location_expression: Corrupted DWARF expression.>, __cond=<optimized out>) at /home/conda/feedstock_root/build_artifacts/gcc_compilers_1628138005912/work/build/x86_64-conda-linux-gnu/libstdc++-v3/src/c++11/condition_variable.cc:865
#4  std::__condvar::wait (__m=<error reading variable: dwarf2_find_location_expression: Corrupted DWARF expression.>, this=<optimized out>) at ../../../../../libstdc++-v3/src/c++11/gthr-default.h:155
#5  std::condition_variable::wait (this=<optimized out>, __lock=...) at ../../../../../libstdc++-v3/src/c++11/condition_variable.cc:41
#6  0x00007f87d1420c11 in std::condition_variable::wait<arrow::fs::(anonymous namespace)::ObjectOutputStream::Flush()::{lambda()#1}>(std::unique_lock<std::mutex>&, arrow::fs::(anonymous namespace)::ObjectOutputStream::Flush()::{lambda()#1}) (this=0x7f87900051b8, __lock=..., __p=...) at /usr/lib/gcc/x86_64-linux-gnu/7.5.0/../../../../include/c++/7.5.0/condition_variable:99
#7  0x00007f87d141ba1e in arrow::fs::(anonymous namespace)::ObjectOutputStream::Flush (this=0x7f879000add0) at /home/lidavidm/Code/upstream/arrow-15265/cpp/src/arrow/filesystem/s3fs.cc:1301
#8  0x00007f87d141bfe0 in arrow::fs::(anonymous namespace)::ObjectOutputStream::Close (this=0x7f879000add0) at /home/lidavidm/Code/upstream/arrow-15265/cpp/src/arrow/filesystem/s3fs.cc:1218
#9  0x00007f87d141c66d in virtual thunk to arrow::fs::(anonymous namespace)::ObjectOutputStream::Close() () at /usr/lib/gcc/x86_64-linux-gnu/7.5.0/../../../../include/c++/7.5.0/bits/hashtable.h:492
#10 0x00007f87c7d7d907 in arrow::dataset::FileWriter::Finish (this=0x7f8788007d80) at /home/lidavidm/Code/upstream/arrow-15265/cpp/src/arrow/dataset/file_base.cc:322
#11 0x00007f87c7d59040 in arrow::dataset::internal::(anonymous namespace)::DatasetWriterFileQueue::DoFinish (this=0x7f87b000b5f0) at /home/lidavidm/Code/upstream/arrow-15265/cpp/src/arrow/dataset/dataset_writer.cc:221
#12 0x00007f87c7d58ee6 in arrow::dataset::internal::(anonymous namespace)::DatasetWriterFileQueue::DoDestroy()::{lambda()#1}::operator()() const (this=0x7f87b00088c8)
```