[ https://issues.apache.org/jira/browse/IMPALA-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17741057#comment-17741057 ]

Gergely Fürnstáhl edited comment on IMPALA-12233 at 7/7/23 2:54 PM:
--------------------------------------------------------------------

Never mind, I was checking the wrong process.

-I started to dig into this, but I am not sure it's the cyclic barrier; I added logs to it and all the cycles finish correctly. When we try to open the final Aggregator node, that's where we get stuck.-


was (Author: JIRAUSER283863):
I started to dig into this, but I am not sure it's the cyclic barrier; I added logs to it and all the cycles finish correctly. When we try to open the final Aggregator node, that's where we get stuck:
{code:java}
Thread 1167 (Thread 0x7fc917d02700 (LWP 3017722) "impalad"):
#0  0x00007fcb78476376 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007fcb767996cc in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /mnt/source/gcc/build-10.4.0/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2  std::condition_variable::wait (this=<optimized out>, __lock=...) at ../../../../../gcc-10.4.0/libstdc++-v3/src/c++11/condition_variable.cc:53
#3  0x00000000027e0636 in std::_V2::condition_variable_any::wait<std::unique_lock<impala::SpinLock> > (this=0x113f51f8, __lock=...) at /home/gfurnstahl/Impala/toolchain/toolchain-packages-gcc10.4.0/gcc-10.4.0/include/c++/10.4.0/condition_variable:324
#4  0x00000000027d7b13 in impala::KrpcDataStreamRecvr::SenderQueue::GetBatch (this=0x113f51d0, next_batch=0xeedad60) at /home/gfurnstahl/Impala/be/src/runtime/krpc-data-stream-recvr.cc:251
#5  0x00000000027de6fd in impala::KrpcDataStreamRecvr::GetBatch (this=0x132734a0, next_batch=0xeedad60) at /home/gfurnstahl/Impala/be/src/runtime/krpc-data-stream-recvr.cc:818
#6  0x0000000002f9f0bd in impala::ExchangeNode::FillInputRowBatch (this=0xeedab00, state=0x16436480) at /home/gfurnstahl/Impala/be/src/exec/exchange-node.cc:172
#7  0x0000000002fa05a5 in impala::ExchangeNode::GetNext (this=0xeedab00, state=0x16436480, output_batch=0x7fc917d00d80, eos=0x7fc917d00d5f) at /home/gfurnstahl/Impala/be/src/exec/exchange-node.cc:233
#8  0x000000000310eb8d in impala::AggregationNode::Open (this=0x16436240, state=0x16436480) at /home/gfurnstahl/Impala/be/src/exec/aggregation-node.cc:67
#9  0x00000000028c5a10 in impala::FragmentInstanceState::Open (this=0xf339380) at /home/gfurnstahl/Impala/be/src/runtime/fragment-instance-state.cc:426
#10 0x00000000028c208a in impala::FragmentInstanceState::Exec (this=0xf339380) at /home/gfurnstahl/Impala/be/src/runtime/fragment-instance-state.cc:95
{code}
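For context, frame #4 above is the standard blocking-consumer shape: the receiver's sender queue sleeps on a condition variable until either a row batch is queued or the sender signals end-of-stream. Below is a minimal sketch of that pattern (ToySenderQueue and its members are illustrative stand-ins, not Impala's actual classes); if the upstream fragment is itself stuck, neither condition ever becomes true and the consumer waits forever:
{code:cpp}
#include <condition_variable>
#include <deque>
#include <mutex>

// Illustrative stand-in for the sender queue in frame #4 above: GetBatch()
// blocks until a batch is queued or the sender signals end-of-stream. If the
// sending fragment is itself stuck, neither happens and the consumer hangs.
struct ToySenderQueue {
  std::mutex lock;
  std::condition_variable cv;
  std::deque<int> batches;  // stand-in for queued row batches
  bool eos = false;         // set when the sender closes the stream

  // Returns true and pops a batch, or false once end-of-stream is reached.
  bool GetBatch(int* next_batch) {
    std::unique_lock<std::mutex> l(lock);
    cv.wait(l, [this] { return !batches.empty() || eos; });  // the blocked frame
    if (batches.empty()) return false;
    *next_batch = batches.front();
    batches.pop_front();
    return true;
  }
};
{code}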

> Partitioned hash join with a limit can hang when using mt_dop>0
> ---------------------------------------------------------------
>
>                 Key: IMPALA-12233
>                 URL: https://issues.apache.org/jira/browse/IMPALA-12233
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend
>    Affects Versions: Impala 4.3.0
>            Reporter: Joe McDonnell
>            Assignee: Gergely Fürnstáhl
>            Priority: Blocker
>
> After encountering a hung query on an Impala cluster, we were able to 
> reproduce it in the Impala developer environment with these steps:
> {noformat}
> use tpcds;
> set mt_dop=2;
> select ss_cdemo_sk from store_sales where ss_sold_date_sk = (select max(ss_sold_date_sk) from store_sales) group by ss_cdemo_sk limit 1;{noformat}
> The problem reproduces with limit values up to 183; at limit 184 and higher it does not reproduce.
> Taking stack traces shows a thread waiting on a cyclic barrier:
> {noformat}
>  0  libpthread.so.0!__pthread_cond_wait + 0x216
>  1  impalad!impala::CyclicBarrier::Wait<impala::PhjBuilder::DoneProbingHashPartitions(const int64_t*, impala::BufferPool::ClientHandle*, impala::RuntimeProfile*, std::deque<std::unique_ptr<impala::PhjBuilderPartition> >*, impala::RowBatch*)::<lambda()> > [condition-variable.h : 49 + 0xc]
>  2  impalad!impala::PhjBuilder::DoneProbingHashPartitions(long const*, impala::BufferPool::ClientHandle*, impala::RuntimeProfile*, std::deque<std::unique_ptr<impala::PhjBuilderPartition, std::default_delete<impala::PhjBuilderPartition> >, std::allocator<std::unique_ptr<impala::PhjBuilderPartition, std::default_delete<impala::PhjBuilderPartition> > > >*, impala::RowBatch*) [partitioned-hash-join-builder.cc : 766 + 0x25]
>  3  impalad!impala::PartitionedHashJoinNode::DoneProbing(impala::RuntimeState*, impala::RowBatch*) [partitioned-hash-join-node.cc : 1189 + 0x28]
>  4  impalad!impala::PartitionedHashJoinNode::GetNext(impala::RuntimeState*, impala::RowBatch*, bool*) [partitioned-hash-join-node.cc : 599 + 0x15]
>  5  impalad!impala::StreamingAggregationNode::GetRowsStreaming(impala::RuntimeState*, impala::RowBatch*) [streaming-aggregation-node.cc : 115 + 0x14]
>  6  impalad!impala::StreamingAggregationNode::GetNext(impala::RuntimeState*, impala::RowBatch*, bool*) [streaming-aggregation-node.cc : 77 + 0x15]
>  7  impalad!impala::FragmentInstanceState::ExecInternal() [fragment-instance-state.cc : 446 + 0x15]
>  8  impalad!impala::FragmentInstanceState::Exec() [fragment-instance-state.cc : 104 + 0xf]
>  9  impalad!impala::QueryState::ExecFInstance(impala::FragmentInstanceState*) [query-state.cc : 956 + 0xf]{noformat}
> Adding some debug logging around the locations that go through that cyclic barrier, we see one impalad where the barrier expects two threads but only one arrives:
> {noformat}
> I0621 18:28:19.926551 210363 partitioned-hash-join-builder.cc:766] 2a4787b28425372d:ac6bd96200000004] DoneProbingHashPartitions: num_probe_threads_=2
> I0621 18:28:19.927855 210362 streaming-aggregation-node.cc:136] 2a4787b28425372d:ac6bd96200000003] the number of rows (93) returned from the streaming aggregation node has exceeded the limit of 1
> I0621 18:28:19.928887 210362 query-state.cc:958] 2a4787b28425372d:ac6bd96200000003] Instance completed. instance_id=2a4787b28425372d:ac6bd96200000003 #in-flight=4 status=OK{noformat}
> Other instances that don't have a stuck thread see both threads arrive:
> {noformat}
> I0621 18:28:19.926223 210358 partitioned-hash-join-builder.cc:766] 2a4787b28425372d:ac6bd96200000005] DoneProbingHashPartitions: num_probe_threads_=2
> I0621 18:28:19.926326 210359 partitioned-hash-join-builder.cc:766] 2a4787b28425372d:ac6bd96200000006] DoneProbingHashPartitions: num_probe_threads_=2{noformat}
> So, there must be a codepath that skips going through the cyclic barrier.
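A minimal, generic C++ sketch of that hypothesis follows (ToyBarrier and the thread names are assumptions for illustration, not Impala's CyclicBarrier or PhjBuilder code): a barrier sized for num_probe_threads_=2 deadlocks as soon as one probe thread returns early, for example because its row limit was hit, without ever calling Wait():
{code:cpp}
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

// Toy barrier with the usual contract: Wait() blocks until 'num_threads'
// participants have arrived, then releases them all.
class ToyBarrier {
 public:
  explicit ToyBarrier(int num_threads) : num_threads_(num_threads) {}

  void Wait() {
    std::unique_lock<std::mutex> l(lock_);
    if (++arrived_ == num_threads_) {
      arrived_ = 0;
      cv_.notify_all();
      return;
    }
    // Blocks indefinitely if one of the expected participants never arrives.
    cv_.wait(l, [this] { return arrived_ == 0; });
  }

 private:
  std::mutex lock_;
  std::condition_variable cv_;
  const int num_threads_;
  int arrived_ = 0;
};

int main() {
  ToyBarrier barrier(2);  // sized like num_probe_threads_=2

  // Probe thread A finishes probing normally and waits on the barrier.
  std::thread a([&] {
    barrier.Wait();  // never returns: the second participant never shows up
    std::cout << "A passed the barrier\n";
  });

  // Probe thread B hits its row limit and returns early, never calling
  // Wait() -- the suspected codepath that skips the cyclic barrier.
  std::thread b([] { /* early exit, no barrier.Wait() */ });

  b.join();
  a.join();  // the program intentionally hangs here, mirroring the hung query
  return 0;
}
{code}
This matches the log above, where the instance that hit its limit of 1 completes without reaching DoneProbingHashPartitions, leaving the other instance stuck in CyclicBarrier::Wait.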


