1. Does the compiler take the number of records into consideration when compiling the query?
Yes. The optimizer has statistical information about the sizes of the various tables, their indexes and key structures, and so on.
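If those statistics have gone stale, newer Derby releases provide the documented SYSCS_UTIL.SYSCS_UPDATE_STATISTICS system procedure to refresh them; the schema and table names below are placeholders:

```sql
-- Refresh cardinality statistics for one table (NULL = all its indexes).
-- 'APP' and 'MY_BASE_TABLE' are placeholder names for illustration.
CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('APP', 'MY_BASE_TABLE', NULL);
```

Better statistics will not shrink the search space, but they can keep the optimizer from chasing plans built on bad cardinality estimates.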
2. Am I right to assume that the cause of the delay is the excessive number of joins, which causes the compiler (and optimizer) to have a hard time figuring out the best query plan?
Possibly. There are extreme combinatorics here, for sure; some of the search-space exploration is factorial in complexity. I ran into some similar problems and opened an issue with my findings: https://issues.apache.org/jira/browse/DERBY-2130 The details of that issue reference other related issues.
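To give a feel for the factorial growth, here is a small arithmetic sketch (not Derby's actual search algorithm): the number of left-deep join orders for n tables is n!, so each table added to a view multiplies the space the optimizer may have to consider.

```java
// Sketch of join-order combinatorics: n tables admit n! left-deep
// join orders. This is plain arithmetic, not Derby internals.
public class JoinOrders {
    static long joinOrders(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i; // accumulate n!
        }
        return result;
    }

    public static void main(String[] args) {
        for (int n : new int[] {5, 10, 15}) {
            System.out.println(n + " tables -> " + joinOrders(n) + " join orders");
        }
    }
}
```

Fifteen tables already yields over a trillion candidate orderings, which is why the optimizer prunes and times out rather than searching exhaustively.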
3. Is there any parameter that controls the timeout for that optimization? Since the view does not actually have many records to begin with, I think it's acceptable to go with a suboptimal query plan.
As you'll see in my notes on DERBY-2130, I followed a similar line of reasoning and found that forcing an early termination of the search produced a reasonable query plan.
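A documented way to take the join-order search out of the optimizer's hands entirely is a --DERBY-PROPERTIES override in the FROM clause, which pins the join order to the order in which the tables are written; the table names below are hypothetical:

```sql
-- joinOrder=FIXED tells the optimizer to join the tables in the
-- listed order, skipping the join-order search.
-- t_orders and t_items are placeholder table names.
SELECT *
FROM --DERBY-PROPERTIES joinOrder=FIXED
     t_orders, t_items
WHERE t_orders.id = t_items.order_id
```

Note that the --DERBY-PROPERTIES clause is syntactically a comment and must be terminated by a newline. Fixing the order by hand trades optimality for predictable, fast compilation, which matches your reasoning above.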
4. Is there any other information I can provide to help debug? There are no stack traces in the log, but I'm not seeing much info in it anyway: I see a line where compilation begins, and the next line, two minutes later, is when compilation ends. I have also attached my derby.properties below [4].
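For more visibility into that two-minute window, Derby's documented tracing properties can be enabled in derby.properties; they log each statement and the query plan the optimizer finally chose:

```
# Log the text of each statement as it is executed
derby.language.logStatementText=true
# Log the chosen query plan (with estimated costs) to derby.log
derby.language.logQueryPlan=true
```

The logged plan won't explain why compilation is slow, but it does show which plan the optimizer settled on, which is useful when comparing runs.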
This is hard stuff. I'm not sure what to suggest. Although Derby has powerful algorithms in this area, there are a few weak spots, and it's possible you've hit one of them. Since Derby is open source, we as a community can all work together to try to improve its implementation in this area.

thanks,

bryan