[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2019-01-10 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r246920271
 
 

 ##
 File path: 
sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
 ##
 @@ -0,0 +1,632 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.SettableFuture;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.adapter.java.JavaTypeFactory;
+import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.QueryProvider;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.druid.client.DirectDruidClient;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.ImmutableDruidServer;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.data.input.InputRow;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.jackson.DefaultObjectMapper;
+import org.apache.druid.java.util.common.Intervals;
+import org.apache.druid.java.util.common.Pair;
+import org.apache.druid.java.util.common.io.Closer;
+import org.apache.druid.java.util.http.client.HttpClient;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.FullResponseHolder;
+import org.apache.druid.java.util.http.client.response.HttpResponseHandler;
+import org.apache.druid.query.QueryRunnerFactoryConglomerate;
+import org.apache.druid.query.QueryRunnerTestHelper;
+import org.apache.druid.query.ReflectionQueryToolChestWarehouse;
+import org.apache.druid.query.aggregation.CountAggregatorFactory;
+import org.apache.druid.query.aggregation.DoubleSumAggregatorFactory;
+import org.apache.druid.query.aggregation.LongSumAggregatorFactory;
+import org.apache.druid.query.aggregation.hyperloglog.HyperUniquesAggregatorFactory;
+import org.apache.druid.segment.IndexBuilder;
+import org.apache.druid.segment.QueryableIndex;
+import org.apache.druid.segment.TestHelper;
+import org.apache.druid.segment.incremental.IncrementalIndexSchema;
+import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
+import org.apache.druid.server.coordination.DruidServerMetadata;
+import org.apache.druid.server.coordination.ServerType;
+import org.apache.druid.server.coordinator.BytesAccumulatingResponseHandler;
+import org.apache.druid.server.metrics.NoopServiceEmitter;
+import org.apache.druid.server.security.Access;
+import org.apache.druid.server.security.Authorizer;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.server.security.NoopEscalator;
+import org.apache.druid.sql.calcite.planner.PlannerConfig;
+import org.apache.druid.sql.calcite.util.CalciteTestBase;
+import org.apache.druid.sql.calcite.util.CalciteTests;
+import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
+import org.apache.druid.sql.calcite.util.TestServerInventoryView;
+import org.apache.druid.sql.calcite.view.NoopViewManager;
+import org.apache.druid.timeline.DataSegment;
+import org.easymock.EasyMock;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-12 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r224949840
 
 

 ##
 File path: 
sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
 ##
 @@ -0,0 +1,632 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-12 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r224943056
 
 

 ##
 File path: 
sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
 ##
 @@ -0,0 +1,632 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-11 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r224638922
 
 

 ##
 File path: 
sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
 ##
 @@ -0,0 +1,632 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-09 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223806122
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -320,25 +322,32 @@ public void awaitInitialization() throws InterruptedException
   private void addSegment(final DruidServerMetadata server, final DataSegment segment)
   {
 synchronized (lock) {
-  final Map knownSegments = segmentSignatures.get(segment.getDataSource());
+  final Map knownSegments = segmentMetadataInfo.get(segment.getDataSource());
   if (knownSegments == null || !knownSegments.containsKey(segment)) {
+final long isRealtime = server.segmentReplicatable() ? 0 : 1;
 
 Review comment:
   ok, added the comment.
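The line under discussion encodes a boolean as a long so it can surface directly as a SQL LONG column. A minimal sketch of that encoding, with `segmentReplicatable` passed in as a plain boolean (the real check lives on Druid's server metadata; this is an illustration, not the actual implementation):

```java
// Sketch only: Druid's system tables expose boolean-ish state as LONG columns
// (1 = true, 0 = false). A segment served by a server whose segments are not
// replicatable (i.e. a realtime task) is flagged as realtime.
class RealtimeFlagSketch
{
  static long isRealtimeFlag(boolean segmentReplicatable)
  {
    // Replicatable segments are served by historicals, hence not realtime.
    return segmentReplicatable ? 0L : 1L;
  }
}
```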


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-09 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223810018
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,655 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.ImmutableDruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.coordinator.BytesAccumulatingResponseHandler;
+import org.apache.druid.server.security.Access;
+import org.apache.druid.server.security.Action;
+import org.apache.druid.server.security.AuthenticationResult;
+import org.apache.druid.server.security.AuthorizationUtils;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.server.security.ForbiddenException;
+import org.apache.druid.server.security.Resource;
+import org.apache.druid.server.security.ResourceAction;
+import org.apache.druid.server.security.ResourceType;
+import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SERVER_SEGMENTS_TABLE = "server_segments";
+  private static final String TASKS_TABLE = "tasks";
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-09 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223803923
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SegmentMetadataHolder.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import org.apache.druid.sql.calcite.table.RowSignature;
+
+import javax.annotation.Nullable;
+
+public class SegmentMetadataHolder
+{
+  private final Object lock = new Object();
+  private RowSignature rowSignature;
+  private final long isPublished;
+  private final long isAvailable;
+  private final long isRealtime;
 
 Review comment:
   sure, added this in comment.
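A hedged sketch of the holder shape quoted above, assuming the three state flags stay immutable (stored as longs, 1 = true and 0 = false, so they can surface directly as SQL LONG columns) while the row signature may be set later under the private lock. A `String` stands in for Druid's `RowSignature` type:

```java
// Sketch only: mirrors the fields quoted above, not Druid's actual class.
class SegmentMetadataHolderSketch
{
  private final Object lock = new Object();
  private String rowSignature;          // stand-in for Druid's RowSignature
  private final long isPublished;
  private final long isAvailable;
  private final long isRealtime;

  SegmentMetadataHolderSketch(long isPublished, long isAvailable, long isRealtime)
  {
    this.isPublished = isPublished;
    this.isAvailable = isAvailable;
    this.isRealtime = isRealtime;
  }

  // The mutable signature is guarded by the lock; the flags need no guard.
  void setRowSignature(String rowSignature)
  {
    synchronized (lock) {
      this.rowSignature = rowSignature;
    }
  }

  String getRowSignature()
  {
    synchronized (lock) {
      return rowSignature;
    }
  }

  long isRealtime()
  {
    return isRealtime;
  }
}
```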





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-09 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223804348
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -528,6 +533,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The "sys" schema provides visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they have been published yet.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version string (generally an ISO 8601 timestamp corresponding to when the segment set was first started). A higher version indicates a more recently created segment; versions are compared as strings.|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server (historical or realtime)|
+|is_realtime|True if this segment is being served by any type of realtime task|
 
 Review comment:
   good point, changed the doc





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223528986
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
 ##
 @@ -309,19 +314,88 @@ private PlannerResult planWithBindableConvention(
 } else {
   final BindableRel theRel = bindableRel;
   final DataContext dataContext = 
plannerContext.createDataContext((JavaTypeFactory) planner.getTypeFactory());
-  final Supplier> resultsSupplier = new 
Supplier>()
-  {
-@Override
-public Sequence get()
-{
-  final Enumerable enumerable = theRel.bind(dataContext);
-  return Sequences.simple(enumerable);
-}
-  };
+
+  final Supplier> resultsSupplier = () -> new 
BaseSequence<>(
+  new BaseSequence.IteratorMaker()
+  {
+@Override
+public CloseableEnumerableIterator make()
+{
+  final Enumerable enumerable = theRel.bind(dataContext);
+  final Enumerator enumerator = enumerable.enumerator();
+  return new CloseableEnumerableIterator(new Iterator()
 
 Review comment:
   okay, will use `Sequences.withBaggage()`, but it seems I still need to 
override `make` from `BaseSequence`. Changed to use 
`Sequences.withBaggage()` with an `EnumeratorIterator`.
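The pattern under discussion — wrap a resource-backed iterator so the underlying resource (here a Calcite `Enumerator`) is released when iteration finishes — can be sketched with plain JDK types. Druid's actual `Sequences.withBaggage()` attaches a `Closeable` to a `Sequence`; this standalone sketch only shows the shape of the idea:

```java
import java.util.Iterator;
import java.util.List;

public class BaggageIterator<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private final AutoCloseable baggage; // resource to release, e.g. an Enumerator
    private boolean closed = false;

    public BaggageIterator(Iterator<T> delegate, AutoCloseable baggage) {
        this.delegate = delegate;
        this.baggage = baggage;
    }

    @Override
    public boolean hasNext() {
        if (delegate.hasNext()) {
            return true;
        }
        closeQuietly(); // exhausted: release the underlying resource
        return false;
    }

    @Override
    public T next() {
        return delegate.next();
    }

    private void closeQuietly() {
        if (!closed) {
            closed = true;
            try {
                baggage.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        boolean[] released = {false};
        Iterator<Integer> it = new BaggageIterator<>(
            List.of(1, 2, 3).iterator(),
            () -> released[0] = true // stand-in for enumerator.close()
        );
        int sum = 0;
        while (it.hasNext()) {
            sum += it.next();
        }
        System.out.println(sum + " " + released[0]); // prints "6 true"
    }
}
```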





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223470836
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import 
org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import 
org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223467463
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -320,25 +322,32 @@ public void awaitInitialization() throws 
InterruptedException
   private void addSegment(final DruidServerMetadata server, final DataSegment 
segment)
   {
 synchronized (lock) {
-  final Map knownSegments = 
segmentSignatures.get(segment.getDataSource());
+  final Map knownSegments = 
segmentMetadataInfo.get(segment.getDataSource());
   if (knownSegments == null || !knownSegments.containsKey(segment)) {
+final long isRealtime = server.segmentReplicatable() ? 0 : 1;
 
 Review comment:
   I am not clear on where to add a comment, or what it should say. We could 
rename the `segmentReplicatable()` method in `DruidServerMetadata` to 
`isRealtimeServer()`, if that's what you mean?





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223463211
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The `sys` schema provides visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they 
have been published yet.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version string (generally an ISO 8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the segment; this value may be null if 
unknown to the Broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
 
 Review comment:
   okay, changed the doc





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223470735
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import 
org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import 
org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223463541
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The `sys` schema provides visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they 
have been published yet.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version string (generally an ISO 8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the segment; this value may be null if 
unknown to the Broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized datasegment payload|
+
+### SERVERS table
+The servers table lists all data servers (any server that hosts a segment). It 
includes both Historicals and Peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|host|Hostname of the server|
+|plaintext_port|Unsecured port of the server, or -1 if plaintext traffic is 
disabled|
+|tls_port|TLS port of the server, or -1 if TLS is disabled|
+|server_type|Type of Druid service. Possible values include historical, 
realtime, and indexer_executor.|
 
 Review comment:
   ok, changed the doc





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223470002
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SegmentMetadataHolder.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import org.apache.druid.sql.calcite.table.RowSignature;
+
+import javax.annotation.Nullable;
+
+public class SegmentMetadataHolder
+{
+  private final Object lock = new Object();
+  private RowSignature rowSignature;
+  private final long isPublished;
+  private final long isAvailable;
+  private final long isRealtime;
 
 Review comment:
   For example, to count the number of published segments. See 
[here](https://github.com/apache/incubator-druid/pull/6094/commits/44c6337ac088b3eed723451b31c12a82fb5ee308#r209747621)
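The point of storing these boolean-like fields as `long` (1/0) is that SQL aggregations such as `SUM(is_published)` or `COUNT(*) ... WHERE is_published = 1` work directly on them. A simplified sketch of that idea (the `SegmentMeta` record is a hypothetical stand-in, not Druid's actual `SegmentMetadataHolder`):

```java
import java.util.List;

public class PublishedCount {
    // Hypothetical stand-in for SegmentMetadataHolder: boolean-like state
    // is stored as a long (1 = true, 0 = false) so it can be summed.
    record SegmentMeta(String id, long isPublished, long isAvailable) {}

    public static void main(String[] args) {
        List<SegmentMeta> segments = List.of(
            new SegmentMeta("seg-1", 1, 1),
            new SegmentMeta("seg-2", 0, 1),
            new SegmentMeta("seg-3", 1, 0)
        );
        // Equivalent of SELECT SUM(is_published) FROM sys.segments
        long published = segments.stream().mapToLong(SegmentMeta::isPublished).sum();
        System.out.println(published); // prints "2"
    }
}
```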





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223461261
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -103,9 +104,10 @@
   // Protects access to segmentSignatures, mutableSegments, 
segmentsNeedingRefresh, lastRefresh, isServerViewInitialized
   private final Object lock = new Object();
 
-  // DataSource -> Segment -> RowSignature for that segment.
-  // Use TreeMap for segments so they are merged in deterministic order, from 
older to newer.
-  private final Map> 
segmentSignatures = new HashMap<>();
+  // DataSource -> Segment -> SegmentMetadataHolder(contains RowSignature) for 
that segment.
 
 Review comment:
   Made `SegmentMetadataHolder` immutable, and it seems `segmentMetadataInfo` 
can now be a plain `HashMap` protected by `lock`. Yes, `getSegmentMetadata()` 
also returns a `Map` created under the `lock`.
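The change described — an immutable metadata holder plus a plain `HashMap` whose every access happens under a shared `lock` — can be sketched as follows (a simplified shape with illustrative names, not the actual `DruidSchema` code):

```java
import java.util.HashMap;
import java.util.Map;

public class MetadataRegistry {
    // Immutable holder: all fields final, no setters, safe to share
    // across threads once constructed.
    static final class Holder {
        final long isPublished;
        final long isAvailable;

        Holder(long isPublished, long isAvailable) {
            this.isPublished = isPublished;
            this.isAvailable = isAvailable;
        }
    }

    private final Object lock = new Object();
    // A plain HashMap is fine as long as every read and write holds `lock`.
    private final Map<String, Holder> metadata = new HashMap<>();

    public void put(String segmentId, Holder holder) {
        synchronized (lock) {
            metadata.put(segmentId, holder);
        }
    }

    public Map<String, Holder> snapshot() {
        synchronized (lock) {
            // Copy under the lock so callers never observe a map mid-mutation.
            return new HashMap<>(metadata);
        }
    }

    public static void main(String[] args) {
        MetadataRegistry registry = new MetadataRegistry();
        registry.put("seg-1", new Holder(1, 1));
        Map<String, Holder> snap = registry.snapshot();
        registry.put("seg-2", new Holder(0, 1));
        System.out.println(snap.size()); // prints "1" -- snapshot is unaffected
    }
}
```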





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223460333
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,695 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.ImmutableDruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.java.util.http.client.Request;
+import 
org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import 
org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.Access;
+import org.apache.druid.server.security.Action;
+import org.apache.druid.server.security.AuthenticationResult;
+import org.apache.druid.server.security.AuthorizationUtils;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.server.security.ForbiddenException;
+import org.apache.druid.server.security.Resource;
+import org.apache.druid.server.security.ResourceAction;
+import org.apache.druid.server.security.ResourceType;
+import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SERVER_SEGMENTS_TABLE = "server_segments";
+  private static final String TASKS_TABLE = "tasks";
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223265819
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,695 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.ImmutableDruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.java.util.http.client.Request;
+import 
org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import 
org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.Access;
+import org.apache.druid.server.security.Action;
+import org.apache.druid.server.security.AuthenticationResult;
+import org.apache.druid.server.security.AuthorizationUtils;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.server.security.ForbiddenException;
+import org.apache.druid.server.security.Resource;
+import org.apache.druid.server.security.ResourceAction;
+import org.apache.druid.server.security.ResourceType;
+import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SERVER_SEGMENTS_TABLE = "server_segments";
+  private static final String TASKS_TABLE = "tasks";
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223265809
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,695 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.ImmutableDruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.Access;
+import org.apache.druid.server.security.Action;
+import org.apache.druid.server.security.AuthenticationResult;
+import org.apache.druid.server.security.AuthorizationUtils;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.server.security.ForbiddenException;
+import org.apache.druid.server.security.Resource;
+import org.apache.druid.server.security.ResourceAction;
+import org.apache.druid.server.security.ResourceType;
+import org.apache.druid.sql.calcite.planner.PlannerContext;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SERVER_SEGMENTS_TABLE = "server_segments";
+  private static final String TASKS_TABLE = "tasks";
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223265869
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -539,7 +553,18 @@ private static RowSignature analysisToRowSignature(final 
SegmentAnalysis analysi
 
   rowSignatureBuilder.add(entry.getKey(), valueType);
 }
-
 return rowSignatureBuilder.build();
   }
+
+  // note this is a mutable map accessed only by SystemSchema
 
 Review comment:
  This comment is obsolete now: this method previously returned `segmentMetadataInfo`, which is modified in this class.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223265780
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -528,6 +533,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The "sys" schema provides visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they are published.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version string (generally an ISO 8601 timestamp corresponding to when the segment set was first started). A higher version means a more recently created segment. Versions are compared as strings.|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
 
 Review comment:
  This means `is_available` is true for a segment if it's being served by any server (either historical or realtime).
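As an aside, the semantics described here can be modeled as a simple union check over all server types. This is a stand-in sketch with hypothetical names, not Druid's actual data structures:

```java
import java.util.Map;
import java.util.Set;

public class AvailabilityExample
{
  // Returns 1 if any server of any type (historical or realtime) serves the
  // segment, else 0 — the sys schema exposes boolean flags as LONG columns.
  static long isAvailable(String segmentId, Map<String, Set<String>> serverToSegments)
  {
    return serverToSegments.values().stream().anyMatch(segs -> segs.contains(segmentId)) ? 1L : 0L;
  }

  public static void main(String[] args)
  {
    Map<String, Set<String>> servers = Map.of(
        "historical-1", Set.of("seg-a"),
        "realtime-1", Set.of("seg-b")
    );
    // seg-b is served only by a realtime server, yet still counts as available.
    System.out.println(isAvailable("seg-b", servers));
  }
}
```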





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-08 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r223265744
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
 ##
 @@ -57,36 +61,52 @@
   .build();
 
   private final DruidSchema druidSchema;
+  private final TimelineServerView serverView;
   private final QueryLifecycleFactory queryLifecycleFactory;
   private final DruidOperatorTable operatorTable;
   private final ExprMacroTable macroTable;
   private final PlannerConfig plannerConfig;
   private final ObjectMapper jsonMapper;
   private final AuthorizerMapper authorizerMapper;
+  private final DruidLeaderClient coordinatorDruidLeaderClient;
+  private final DruidLeaderClient overlordDruidLeaderClient;
 
   @Inject
   public PlannerFactory(
   final DruidSchema druidSchema,
+  final TimelineServerView serverView,
   final QueryLifecycleFactory queryLifecycleFactory,
   final DruidOperatorTable operatorTable,
   final ExprMacroTable macroTable,
   final PlannerConfig plannerConfig,
   final AuthorizerMapper authorizerMapper,
-  final @Json ObjectMapper jsonMapper
+  final @Json ObjectMapper jsonMapper,
+  final @Coordinator DruidLeaderClient coordinatorDruidLeaderClient,
+  final @IndexingService DruidLeaderClient overlordDruidLeaderClient
 
 Review comment:
   That makes sense, will pass `SystemSchema` into `PlannerFactory`. Thanks.
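The refactor agreed to here — handing the factory the assembled schema rather than the low-level leader clients needed to build it — is ordinary constructor-injection hygiene. A stand-in sketch (simplified classes, not the real Druid types):

```java
public class PlannerFactoryExample
{
  // Stand-in for org.apache.druid.sql.calcite.schema.SystemSchema.
  static final class SystemSchema
  {
    final String name = "sys";
  }

  // After the refactor: the factory depends on the already-built SystemSchema
  // instead of the coordinator/overlord DruidLeaderClients used to build it.
  static final class PlannerFactory
  {
    private final SystemSchema systemSchema;

    PlannerFactory(SystemSchema systemSchema)
    {
      this.systemSchema = systemSchema;
    }

    String schemaName()
    {
      return systemSchema.name;
    }
  }

  public static void main(String[] args)
  {
    System.out.println(new PlannerFactory(new SystemSchema()).schemaName());
  }
}
```

Narrowing the constructor this way also keeps test setup small: callers construct one schema object instead of wiring two clients.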





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-04 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r222142359
 
 

 ##
 File path: benchmarks/src/main/java/org/apache/druid/benchmark/query/SqlBenchmark.java
 ##
 @@ -111,16 +114,20 @@ public void setup()
 .createQueryRunnerFactoryConglomerate();
 final QueryRunnerFactoryConglomerate conglomerate = conglomerateCloserPair.lhs;
 final PlannerConfig plannerConfig = new PlannerConfig();
+final DruidLeaderClient druidLeaderClient = EasyMock.createMock(DruidLeaderClient.class);
 
 Review comment:
   ~~created a `LeaderClient` interface and added a `NoopDruidLeaderClient` class~~
   using an empty anonymous subclass instead
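The technique mentioned here — a no-op anonymous subclass instead of an EasyMock mock for a dependency the benchmark never exercises — can be sketched with plain Java. `LeaderClient` below is a hypothetical stand-in for the concrete `DruidLeaderClient` class:

```java
// Hypothetical stand-in for a concrete dependency like DruidLeaderClient.
class LeaderClient
{
  public String findCurrentLeader()
  {
    throw new UnsupportedOperationException("the real implementation performs HTTP calls");
  }
}

public class NoopLeaderClientExample
{
  // An anonymous subclass overrides just enough to be harmless during setup,
  // avoiding a mocking-library dependency entirely.
  static final LeaderClient NOOP = new LeaderClient()
  {
    @Override
    public String findCurrentLeader()
    {
      return "http://localhost:8081"; // fixed placeholder; no network I/O
    }
  };

  public static void main(String[] args)
  {
    System.out.println(NOOP.findCurrentLeader());
  }
}
```

Unlike an unconfigured mock, which fails if an unexpected method is invoked, the subclass's behavior is explicit in the source.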





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-02 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r222144680
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-02 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r222142396
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-02 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r222142458
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-02 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r222142359
 
 

 ##
 File path: benchmarks/src/main/java/org/apache/druid/benchmark/query/SqlBenchmark.java
 ##
 @@ -111,16 +114,20 @@ public void setup()
 .createQueryRunnerFactoryConglomerate();
 final QueryRunnerFactoryConglomerate conglomerate = conglomerateCloserPair.lhs;
 final PlannerConfig plannerConfig = new PlannerConfig();
+final DruidLeaderClient druidLeaderClient = EasyMock.createMock(DruidLeaderClient.class);
 
 Review comment:
   created a `LeaderClient` interface and added a `NoopDruidLeaderClient` class





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-02 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r222129216
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -412,21 +418,16 @@ public TableType getJdbcTableType()
   final List druidServers = serverView.getDruidServers();
   final AuthenticationResult authenticationResult =
       (AuthenticationResult) root.get(PlannerContext.DATA_CTX_AUTHENTICATION_RESULT);
 +  final Access access = AuthorizationUtils.authorizeAllResourceActions(
 +      authenticationResult,
 +      Collections.singletonList(new ResourceAction(new Resource("STATE", ResourceType.STATE), Action.READ)),
 +      authorizerMapper
 +  );
 +  if (!access.isAllowed()) {
 +    return Linq4j.asEnumerable(ImmutableList.of());
 
 Review comment:
   Okay, will throw an exception instead.
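The change agreed to above — failing loudly on a denied STATE read instead of silently returning an empty result — can be sketched in plain Java. `Access` and `ForbiddenException` here are simplified stand-ins for Druid's security classes:

```java
import java.util.List;

public class AuthCheckExample
{
  // Simplified stand-in for org.apache.druid.server.security.Access.
  static final class Access
  {
    private final boolean allowed;
    Access(boolean allowed) { this.allowed = allowed; }
    boolean isAllowed() { return allowed; }
  }

  // Simplified stand-in for org.apache.druid.server.security.ForbiddenException.
  static final class ForbiddenException extends RuntimeException
  {
    ForbiddenException(String msg) { super(msg); }
  }

  // Throwing lets the caller distinguish "no rows" from "not permitted";
  // an empty enumerable would conflate the two.
  static List<String> scanServers(Access access, List<String> servers)
  {
    if (!access.isAllowed()) {
      throw new ForbiddenException("Insufficient permission to view servers");
    }
    return servers;
  }

  public static void main(String[] args)
  {
    System.out.println(scanServers(new Access(true), List.of("historical-1")));
  }
}
```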





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-01 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221765858
 
 

 ##
 File path: sql/src/main/java/org/apache/druid/sql/calcite/schema/SegmentMetadataHolder.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import org.apache.druid.sql.calcite.table.RowSignature;
+
+import javax.annotation.Nullable;
+
+public class SegmentMetadataHolder
+{
+  private final Object lock = new Object();
+  private RowSignature rowSignature;
+  private final long isPublished;
+  private final long isAvailable;
+  private final long isRealtime;
+  private long numReplicas;
+  @Nullable
+  private Long numRows;
+
+
+  public SegmentMetadataHolder(
+      @Nullable RowSignature rowSignature,
+      long isPublished,
+      long isAvailable,
+      long isRealtime,
+      long numReplicas,
+      @Nullable Long numRows
+  )
+  {
+    this.rowSignature = rowSignature;
+    this.isPublished = isPublished;
+    this.isAvailable = isAvailable;
+    this.isRealtime = isRealtime;
+    this.numReplicas = numReplicas;
+    this.numRows = numRows;
+  }
+
+
+  public long isPublished()
+  {
+    synchronized (lock) {
+      return isPublished;
+    }
+  }
+
+  public long isAvailable()
+  {
+    synchronized (lock) {
+      return isAvailable;
+    }
+  }
+
+  public long isRealtime()
+  {
+    synchronized (lock) {
+      return isRealtime;
+    }
+  }
+
+  public long getNumReplicas()
+  {
+    synchronized (lock) {
+      return numReplicas;
+    }
+  }
+
+  @Nullable
+  public Long getNumRows()
+  {
+    synchronized (lock) {
+      return numRows;
+    }
+  }
+
+  @Nullable
+  public RowSignature getRowSignature()
+  {
+    synchronized (lock) {
+      return rowSignature;
+    }
+  }
+
+  public void setRowSignature(RowSignature rowSignature)
 
 Review comment:
  Do you mean to remove the setters and always create a new 
`SegmentMetadataHolder` object whenever any of the members needs an update, 
and replace the entry in the `segmentMetadataInfo` map in `DruidSchema` every 
time an update is required? Is that better than using the `lock`? I wonder if 
a `ReadWriteLock` would be better here.
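The copy-on-update alternative being asked about can be sketched as follows. This is a minimal, hedged sketch under stated assumptions, not the actual Druid code: the class name `SegmentMetadataHolderSketch`, the reduced field set, and the map usage are illustrative, and only one `with`-style copy method is shown.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: all fields are final, so reads need no lock; an "update" builds a
// new holder and atomically swaps the map entry (the map here stands in for
// segmentMetadataInfo in DruidSchema).
public final class SegmentMetadataHolderSketch
{
  private final long isPublished;
  private final long isAvailable;
  private final long isRealtime;
  private final long numReplicas;

  public SegmentMetadataHolderSketch(long isPublished, long isAvailable, long isRealtime, long numReplicas)
  {
    this.isPublished = isPublished;
    this.isAvailable = isAvailable;
    this.isRealtime = isRealtime;
    this.numReplicas = numReplicas;
  }

  // Copy-on-update replaces the setter: returns a new immutable snapshot.
  public SegmentMetadataHolderSketch withNumReplicas(long newNumReplicas)
  {
    return new SegmentMetadataHolderSketch(isPublished, isAvailable, isRealtime, newNumReplicas);
  }

  public long getNumReplicas()
  {
    return numReplicas;
  }

  public static void main(String[] args)
  {
    ConcurrentHashMap<String, SegmentMetadataHolderSketch> segmentMetadataInfo = new ConcurrentHashMap<>();
    segmentMetadataInfo.put("seg1", new SegmentMetadataHolderSketch(1, 1, 0, 1));
    // Atomic replace of the whole entry instead of mutating under a lock.
    segmentMetadataInfo.computeIfPresent("seg1", (id, old) -> old.withNumReplicas(2));
    System.out.println(segmentMetadataInfo.get("seg1").getNumReplicas());
  }
}
```

With every field `final`, readers need no synchronization at all, and the `ConcurrentHashMap` entry swap publishes each new snapshot safely. A `ReentrantReadWriteLock` would mainly pay off if the holder stayed mutable and reads heavily outnumbered writes.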


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-01 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221765823
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+      .builder()
+      .add("segment_id", ValueType.STRING)
+      .add("datasource", ValueType.STRING)
+      .add("start", ValueType.STRING)
+      .add("end", ValueType.STRING)
+      .add("size", ValueType.LONG)
+      .add("version", ValueType.STRING)
+      .add("partition_num", ValueType.STRING)
+      .add("num_replicas", ValueType.LONG)
+      .add("num_rows", ValueType.LONG)
+      .add("is_published", ValueType.LONG)
+      .add("is_available", ValueType.LONG)
+      .add("is_realtime", ValueType.LONG)
+      .add("payload", ValueType.STRING)
+      .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-10-01 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221765584
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496506
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496546
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496474
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496485
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496495
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
 
 Review comment:
  Yes, I kept it like that, following `InformationSchema`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496420
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -320,25 +322,32 @@ public void awaitInitialization() throws 
InterruptedException
   private void addSegment(final DruidServerMetadata server, final DataSegment 
segment)
   {
 synchronized (lock) {
-  final Map knownSegments = 
segmentSignatures.get(segment.getDataSource());
+  final Map knownSegments = 
segmentMetadataInfo.get(segment.getDataSource());
   if (knownSegments == null || !knownSegments.containsKey(segment)) {
+final long isRealtime = server.segmentReplicatable() ? 0 : 1;
+final long isPublished = 
server.getType().toString().equals(ServerType.HISTORICAL.toString()) ? 1 : 0;
+SegmentMetadataHolder holder = new SegmentMetadataHolder(null, 
isPublished, 1, isRealtime, 1, null);
 // Unknown segment.
-setSegmentSignature(segment, null);
+setSegmentSignature(segment, holder);
 segmentsNeedingRefresh.add(segment);
-
 if (!server.segmentReplicatable()) {
   log.debug("Added new mutable segment[%s].", segment.getIdentifier());
   mutableSegments.add(segment);
 } else {
   log.debug("Added new immutable segment[%s].", 
segment.getIdentifier());
 }
-  } else if (server.segmentReplicatable()) {
-// If a segment shows up on a replicatable (historical) server at any 
point, then it must be immutable,
-// even if it's also available on non-replicatable (realtime) servers.
-mutableSegments.remove(segment);
-log.debug("Segment[%s] has become immutable.", 
segment.getIdentifier());
+  } else {
+if (knownSegments != null && knownSegments.containsKey(segment)) {
 
 Review comment:
  Removed the null check on `knownSegments`.
  
  > and this can cause the race condition if we remove `lock` of this class
  
  But I am not removing the `lock` here, so I don't understand where the race condition would be.
  
  > Why not use `computeIfAbsent()`?
  
  It seems `computeIfAbsent()` also calls `doGet()`. Why do you think it would not cause a race condition if we removed the `lock`?
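The semantics in question can be sketched in isolation (the map contents below are illustrative, not the real `DruidSchema` fields): `computeIfAbsent()` makes the single get-or-create step atomic on a concurrent map, but it does not make a larger read-modify-write sequence atomic, which is why a surrounding `lock` still matters when several structures must stay consistent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class ComputeIfAbsentSketch
{
  // computeIfAbsent atomically creates the inner map on first access, but any
  // multi-step sequence touching other collections still needs external locking.
  public static void main(String[] args)
  {
    Map<String, Map<String, String>> byDataSource = new ConcurrentSkipListMap<>();
    byDataSource.computeIfAbsent("wikipedia", k -> new ConcurrentSkipListMap<>())
                .put("segment-1", "metadata");
    System.out.println(byDataSource.get("wikipedia").size()); // prints 1
  }
}
```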





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496460
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -517,10 +532,9 @@ private DruidTable buildDruidTable(final String 
dataSource)
 return queryLifecycleFactory.factorize().runSimple(segmentMetadataQuery, 
authenticationResult, null);
   }
 
-  private static RowSignature analysisToRowSignature(final SegmentAnalysis 
analysis)
+  private static Pair analysisToRowSignature(final 
SegmentAnalysis analysis)
 
 Review comment:
  Yeah, that seems better; reverted the change.





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496386
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SegmentMetadataHolder.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import org.apache.druid.sql.calcite.table.RowSignature;
+
+import javax.annotation.Nullable;
+
+public class SegmentMetadataHolder
+{
+  private final Object lock = new Object();
+  private RowSignature rowSignature;
+  private final long isPublished;
+  private final long isAvailable;
+  private final long isRealtime;
 
 Review comment:
   These were changed from boolean to long so that aggregation is easier and 
faster at query time.
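As a small illustration of that design choice (the flag values below are made up), representing the flags as 0/1 longs turns a count into a plain sum, in SQL (`SUM(is_available)`) as well as in code:

```java
import java.util.List;

public class FlagAggregationSketch
{
  public static void main(String[] args)
  {
    // With 0/1 long flags, "how many segments are available" is just a sum,
    // with no boolean-to-number conversion needed at query time.
    List<Long> isAvailable = List.of(1L, 0L, 1L, 1L);
    long available = isAvailable.stream().mapToLong(Long::longValue).sum();
    System.out.println(available); // prints 3
  }
}
```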





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496399
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -320,25 +322,32 @@ public void awaitInitialization() throws 
InterruptedException
   private void addSegment(final DruidServerMetadata server, final DataSegment 
segment)
   {
 synchronized (lock) {
-  final Map knownSegments = 
segmentSignatures.get(segment.getDataSource());
+  final Map knownSegments = 
segmentMetadataInfo.get(segment.getDataSource());
   if (knownSegments == null || !knownSegments.containsKey(segment)) {
+final long isRealtime = server.segmentReplicatable() ? 0 : 1;
+final long isPublished = 
server.getType().toString().equals(ServerType.HISTORICAL.toString()) ? 1 : 0;
 
 Review comment:
   done







[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496402
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
 ##
 @@ -455,22 +469,23 @@ private void removeSegment(final DataSegment segment)
 return retVal;
   }
 
-  private void setSegmentSignature(final DataSegment segment, final 
RowSignature rowSignature)
+  private void setSegmentSignature(final DataSegment segment, final 
SegmentMetadataHolder segmentMetadataHolder)
   {
 synchronized (lock) {
-  segmentSignatures.computeIfAbsent(segment.getDataSource(), x -> new 
TreeMap<>(SEGMENT_ORDER))
-   .put(segment, rowSignature);
+  segmentMetadataInfo.computeIfAbsent(segment.getDataSource(), x -> new 
ConcurrentSkipListMap<>(SEGMENT_ORDER))
+   .put(segment, segmentMetadataHolder);
 
 Review comment:
   fixed indentation





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496364
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SegmentMetadataHolder.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import org.apache.druid.sql.calcite.table.RowSignature;
+
+import javax.annotation.Nullable;
+
+public class SegmentMetadataHolder
 
 Review comment:
   added javadocs





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496345
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/discovery/DruidLeaderClient.java
 ##
 @@ -133,6 +134,14 @@ public FullResponseHolder go(Request request) throws 
IOException, InterruptedExc
 return go(request, new FullResponseHandler(StandardCharsets.UTF_8));
   }
 
+  public  ListenableFuture goStream(
 
 Review comment:
  Renamed to `goAsync` and added javadoc. I added it because the existing `go()` method returns a `FullResponseHolder`, but the response handler needs to be passed to `JsonParserIterator`, which expects a `Future`.
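A minimal sketch of the pattern described above, with `java.util.concurrent.CompletableFuture` standing in for Guava's `ListenableFuture` and a canned byte stream standing in for the HTTP response; the method name `goAsync` is taken from the comment, everything else here is hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;

public class AsyncRequestSketch
{
  // Instead of buffering the full response into a holder object, hand the
  // caller a Future over the streamed body, so a lazy iterator can consume it.
  static CompletableFuture<InputStream> goAsync(String fakeResponseBody)
  {
    return CompletableFuture.supplyAsync(
        () -> new ByteArrayInputStream(fakeResponseBody.getBytes(StandardCharsets.UTF_8))
    );
  }

  public static void main(String[] args) throws Exception
  {
    InputStream in = goAsync("[{\"task_id\":\"t1\"}]").get();
    System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
  }
}
```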





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496313
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they are published yet.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized datasegment payload|
+
+### SERVERS table
+The servers table lists all data servers (any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|host|Hostname of the server|
+|plaintext_port|Unsecured port of the server, or -1 if plaintext traffic is 
disabled|
+|tls_port|TLS port of the server, or -1 if TLS is disabled|
+|server_type|Type of Druid service. Possible values include: historical, 
realtime and indexer_executor.|
+|tier|Distribution tier; see [druid.server.tier](#../configuration/index.html#Historical-General-Configuration)|
+|current_size|Current size of segments in bytes on this server|
+|max_size|Max size in bytes this server recommends to assign to segments; see [druid.server.maxSize](#../configuration/index.html#Historical-General-Configuration)|
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM sys.servers;
+```
+
+### SEGMENT_SERVERS table
+
+SEGMENT_SERVERS is used to join the segments and servers tables.
+
+|Column|Notes|
+|--|-|
+|server|Server name in format host:port (Primary key of [servers 
table](#SERVERS-table))|
+|segment_id|Segment identifier (Primary key of [segments 
table](#SEGMENTS-table))|
+
+A JOIN between "segments" and "servers" can be used to query the number of segments for a specific datasource, grouped by server. Example query:
+```sql
+SELECT count(segments.segment_id) as num_segments from sys.segments as 
segments 
+INNER JOIN sys.segment_servers as segment_servers 
+ON segments.segment_id  = segment_servers.segment_id 
+INNER JOIN sys.servers as servers 
+ON servers.server = segment_servers.server
+WHERE segments.datasource = 'wikipedia' 
+GROUP BY servers.server;
+```
+
+### TASKS table
+
+The tasks table provides information about active and recently completed indexing tasks. For more information, see [ingestion tasks](#../ingestion/tasks.html).
+
+|Column|Notes|
+|--|-|
+|task_id|Unique task identifier|
+|type|Task type; this should be "index" for indexing tasks|
+|datasource|Datasource name being indexed|
+|created_time|Timestamp in ISO8601 format corresponding to when the ingestion 
task was created. Note that this value is populated for completed and waiting 
tasks. For running and pending tasks this value is set to DateTimes.EPOCH|
 
 Review comment:
   ok done





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496308
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they are published yet.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized datasegment payload|
+
+### SERVERS table
+The servers table lists all data servers (any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|host|Hostname of the server|
+|plaintext_port|Unsecured port of the server, or -1 if plaintext traffic is 
disabled|
+|tls_port|TLS port of the server, or -1 if TLS is disabled|
+|server_type|Type of Druid service. Possible values include: historical, 
realtime and indexer_executor.|
+|tier|Distribution tier; see [druid.server.tier](#../configuration/index.html#Historical-General-Configuration)|
+|current_size|Current size of segments in bytes on this server|
+|max_size|Max size in bytes this server recommends to assign to segments; see [druid.server.maxSize](#../configuration/index.html#Historical-General-Configuration)|
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM sys.servers;
+```
+
+### SEGMENT_SERVERS table
+
+SEGMENT_SERVERS is used to join the segments and servers tables.
+
+|Column|Notes|
+|--|-|
+|server|Server name in format host:port (Primary key of [servers 
table](#SERVERS-table))|
+|segment_id|Segment identifier (Primary key of [segments 
table](#SEGMENTS-table))|
+
+A JOIN between "segments" and "servers" can be used to query the number of segments for a specific datasource, grouped by server. Example query:
+```sql
+SELECT count(segments.segment_id) as num_segments from sys.segments as 
segments 
+INNER JOIN sys.segment_servers as segment_servers 
+ON segments.segment_id  = segment_servers.segment_id 
+INNER JOIN sys.servers as servers 
+ON servers.server = segment_servers.server
+WHERE segments.datasource = 'wikipedia' 
+GROUP BY servers.server;
+```
+
+### TASKS table
+
+The tasks table provides information about active and recently completed indexing tasks. For more information, see [ingestion tasks](#../ingestion/tasks.html).
+
+|Column|Notes|
+|--|-|
+|task_id|Unique task identifier|
+|type|Task type; this should be "index" for indexing tasks|
 
 Review comment:
   modified it





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496335
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,595 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496274
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
 
 Review comment:
   I think I copied it from some other documentation for `version`, so if the 
version format changes in the future, the documentation would need to change in all 
places. But the extra information you provided seems useful, and I will append 
it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496288
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if 
unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized datasegment payload|
 
 Review comment:
   changed to `data segment`





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496280
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if 
unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
 
 Review comment:
   I didn't understand your suggestion. Do you mean to provide some example of 
`realtime servers`? Or like `True if this segment is being served by any 
realtime tasks`?





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496258
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -468,6 +468,11 @@ plan SQL queries. This metadata is cached on broker 
startup and also updated per
 [SegmentMetadata queries](segmentmetadataquery.html). Background metadata 
refreshing is triggered by
 segments entering and exiting the cluster, and can also be throttled through 
configuration.
 
+Druid exposes system information through special system tables. There are two 
such schemas available: Information Schema and System Schema.
+Information schema provides details about table and column types. Sys schema 
provides information about Druid internals like segments/tasks/servers.
 
 Review comment:
   So we'll keep it "sys"





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496263
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
 
 Review comment:
   added the quotes around sys





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-30 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221496251
 
 

 ##
 File path: 
benchmarks/src/main/java/org/apache/druid/benchmark/query/SqlBenchmark.java
 ##
 @@ -111,16 +114,20 @@ public void setup()
 .createQueryRunnerFactoryConglomerate();
 final QueryRunnerFactoryConglomerate conglomerate = 
conglomerateCloserPair.lhs;
 final PlannerConfig plannerConfig = new PlannerConfig();
+final DruidLeaderClient druidLeaderClient = 
EasyMock.createMock(DruidLeaderClient.class);
 
 this.walker = new 
SpecificSegmentsQuerySegmentWalker(conglomerate).add(dataSegment, index);
 plannerFactory = new PlannerFactory(
 CalciteTests.createMockSchema(conglomerate, walker, plannerConfig),
+new TestServerInventoryView(walker.getSegments()),
 
 Review comment:
   I didn't look carefully; it's in 
`/druid/sql/src/test/java/org/apache/druid/sql/calcite/util`. I think we can 
keep it there, as it's mostly being used by tests. Will use a mock in this 
benchmark instead of `TestServerInventoryView`.





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-28 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221392310
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,101 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
 
 Review comment:
   Actually, I am a bit confused, and not sure if `sys` would sound better in 
all places. @gianm do you have an opinion? Should we make it consistent and 
call it `sys` instead of `system` in all places?





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-28 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221391895
 
 

 ##
 File path: 
benchmarks/src/main/java/org/apache/druid/benchmark/query/SqlBenchmark.java
 ##
 @@ -111,16 +114,20 @@ public void setup()
 .createQueryRunnerFactoryConglomerate();
 final QueryRunnerFactoryConglomerate conglomerate = 
conglomerateCloserPair.lhs;
 final PlannerConfig plannerConfig = new PlannerConfig();
+final DruidLeaderClient druidLeaderClient = 
EasyMock.createMock(DruidLeaderClient.class);
 
 this.walker = new 
SpecificSegmentsQuerySegmentWalker(conglomerate).add(dataSegment, index);
 plannerFactory = new PlannerFactory(
 CalciteTests.createMockSchema(conglomerate, walker, plannerConfig),
+new TestServerInventoryView(walker.getSegments()),
 
 Review comment:
   hm `TestServerInventoryView` is already in 
`/druid/sql/src/main/java/org/apache/druid/sql/calcite/util`. And not sure if I 
should rename it in this PR, as it was already existing.





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-28 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221372115
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/server/http/MetadataResource.java
 ##
 @@ -136,6 +137,23 @@ public Response getDatabaseSegmentDataSource(
 return Response.status(Response.Status.OK).entity(dataSource).build();
   }
 
+  @GET
+  @Path("/segments")
+  @Produces(MediaType.APPLICATION_JSON)
+  @ResourceFilters(DatasourceResourceFilter.class)
+  public Response getDatabaseSegments()
+  {
+final Collection druidDataSources = 
metadataSegmentManager.getInventory();
+final Set metadataSegments = druidDataSources
+.stream()
+.flatMap(t -> t.getSegments().stream())
+.collect(Collectors.toSet());
+
+Response.ResponseBuilder builder = Response.status(Response.Status.OK);
+return builder.entity(metadataSegments).build();
 
 Review comment:
   cool, didn't know about `StreamingOutput`, using it now.
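   As an illustration of why `StreamingOutput` helps here, a minimal sketch of 
serializing a large collection element by element instead of materializing the 
whole response entity in memory first. This uses a stand-in interface and 
hypothetical names rather than the real `javax.ws.rs.core.StreamingOutput`, so 
it stays self-contained; the resource method in the PR would return the JAX-RS 
interface from `builder.entity(...)` instead.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class StreamingSegmentsDemo
{
  // Minimal stand-in for javax.ws.rs.core.StreamingOutput: a callback that
  // writes the entity directly to the response stream when invoked.
  interface StreamingOutput
  {
    void write(OutputStream output) throws IOException;
  }

  // Write the JSON array one element at a time, so a large segment list is
  // streamed to the client rather than built up as one in-memory object.
  static StreamingOutput segmentsAsJson(List<String> segmentIds)
  {
    return output -> {
      output.write('[');
      boolean first = true;
      for (String id : segmentIds) {
        if (!first) {
          output.write(',');
        }
        first = false;
        output.write(('"' + id + '"').getBytes(StandardCharsets.UTF_8));
      }
      output.write(']');
    };
  }

  public static void main(String[] args) throws IOException
  {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    segmentsAsJson(List.of("seg-1", "seg-2")).write(sink);
    System.out.println(sink.toString(StandardCharsets.UTF_8));
  }
}
```

   The same shape applies to the endpoint above: the stream over 
`metadataSegments` would be consumed inside the `write` callback, keeping the 
full set out of heap at once.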





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-28 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221371837
 
 

 ##
 File path: server/src/main/java/org/apache/druid/client/TimelineServerView.java
 ##
 @@ -36,6 +38,8 @@
   @Nullable
   TimelineLookup getTimeline(DataSource dataSource);
 
+  Map getClients();
 
 Review comment:
   renamed the method.





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-28 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221371456
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,537 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package io.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.inject.Inject;
+import io.druid.client.DruidServer;
+import io.druid.client.ImmutableDruidDataSource;
+import io.druid.client.TimelineServerView;
+import io.druid.client.coordinator.Coordinator;
+import io.druid.client.indexing.IndexingService;
+import io.druid.client.selector.QueryableDruidServer;
+import io.druid.discovery.DruidLeaderClient;
+import io.druid.indexer.TaskStatusPlus;
+import io.druid.java.util.common.ISE;
+import io.druid.java.util.common.StringUtils;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.java.util.http.client.response.FullResponseHolder;
+import io.druid.segment.column.ValueType;
+import io.druid.server.coordination.ServerType;
+import io.druid.server.security.AuthorizerMapper;
+import io.druid.sql.calcite.table.RowSignature;
+import io.druid.timeline.DataSegment;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponseStatus;
+import org.joda.time.DateTime;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENTS_TABLE_SIZE;
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+  .builder()
+  .add("server", ValueType.STRING)
+  .add("scheme", ValueType.STRING)
+  .add("server_type", ValueType.STRING)
+  .add("tier", ValueType.STRING)
+  .add("curr_size", ValueType.LONG)
+  .add("max_size", ValueType.LONG)
+  .build();
+
+  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("server", ValueType.STRING)
+  .add("segment_id", ValueType.STRING)
+  .build();
+
+  private static final RowSignature TASKS_SIGNATURE = RowSignature
+  .builder()
+  .add("task_id", ValueType.STRING)
+  .add("type", ValueType.STRING)
+  .add("datasource", 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-28 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r221369753
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers (any server that hosts a segment). It 
includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
 
 Review comment:
   Just tried concat; it fails with `cannot translate call CONCAT($t1, $t2)`. 
Maybe there is a way to support concat in the future, but for now users can 
just do `select server from sys.servers;` :). Also, `server` is the primary 
key of this table and is used in the join table `segment_servers`.





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220033043
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012235
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,89 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The sys schema provides visibility into Druid segments, servers and tasks.
+For example, to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|num_rows|Number of rows in the current segment; this value could be null if 
unknown to the broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized DataSegment payload|
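The identity columns above (datasource, start, end, version, partition_num) are what conventionally make up a segment_id. A small sketch of that composition, assuming underscore-joined parts with partition 0 omitting its suffix (the exact separator rules are an assumption, not stated in this thread):

```python
# Hypothetical sketch: composing a segment_id from the identity columns
# described above. The underscore-join convention and the rule that
# partition 0 omits its suffix are assumptions for illustration.
def make_segment_id(datasource, start, end, version, partition_num=0):
    base = "_".join([datasource, start, end, version])
    # Partition 0 conventionally omits the trailing partition number.
    return base if partition_num == 0 else "%s_%d" % (base, partition_num)

sid = make_segment_id(
    "wikipedia",
    "2018-09-20T00:00:00.000Z",
    "2018-09-21T00:00:00.000Z",
    "2018-09-22T01:02:03.456Z",
    3,
)
```

Under these assumptions the example yields the familiar `datasource_start_end_version_partition` form.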
+
+### SERVERS table
+The servers table lists all data servers (any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|scheme|Server scheme, either http or https|
+|server_type|Type of Druid service. Possible values include: historical, 
realtime, bridge and indexer_executor.|
 
 Review comment:
   okay, removed `bridge`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012215
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.JavaType;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.inject.Inject;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.DefaultEnumerable;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
+import org.apache.druid.client.JsonParserIterator;
+import org.apache.druid.client.TimelineServerView;
+import org.apache.druid.client.coordinator.Coordinator;
+import org.apache.druid.client.indexing.IndexingService;
+import org.apache.druid.client.selector.QueryableDruidServer;
+import org.apache.druid.discovery.DruidLeaderClient;
+import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.http.client.Request;
+import 
org.apache.druid.java.util.http.client.io.AppendableByteArrayInputStream;
+import org.apache.druid.java.util.http.client.response.ClientResponse;
+import 
org.apache.druid.java.util.http.client.response.InputStreamResponseHandler;
+import org.apache.druid.segment.column.ValueType;
+import org.apache.druid.server.security.AuthorizerMapper;
+import org.apache.druid.sql.calcite.table.RowSignature;
+import org.apache.druid.timeline.DataSegment;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponse;
+
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ExecutionException;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "sys";
+  private static final String SEGMENTS_TABLE = "segments";
+  private static final String SERVERS_TABLE = "servers";
+  private static final String SEGMENT_SERVERS_TABLE = "segment_servers";
+  private static final String TASKS_TABLE = "tasks";
+  private static final int SEGMENT_SERVERS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("segment_id", ValueType.STRING)
+  .add("datasource", ValueType.STRING)
+  .add("start", ValueType.STRING)
+  .add("end", ValueType.STRING)
+  .add("size", ValueType.LONG)
+  .add("version", ValueType.STRING)
+  .add("partition_num", ValueType.STRING)
+  .add("num_replicas", ValueType.LONG)
+  .add("num_rows", ValueType.LONG)
+  .add("is_published", ValueType.LONG)
+  .add("is_available", ValueType.LONG)
+  .add("is_realtime", ValueType.LONG)
+  .add("payload", ValueType.STRING)
+  .build();
+
+  private static final 
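The SEGMENTS_SIGNATURE above uses a fluent builder that collects ordered (column, type) pairs into an immutable signature. A minimal Python sketch of that builder pattern (names are illustrative, not Druid's RowSignature API):

```python
class RowSignatureSketch:
    """Minimal analogue of the ordered column-signature builder above.
    Illustrative only; this is not Druid's RowSignature API."""

    def __init__(self, columns):
        self._columns = tuple(columns)  # immutable, ordered (name, type) pairs

    class Builder:
        def __init__(self):
            self._columns = []

        def add(self, name, value_type):
            self._columns.append((name, value_type))
            return self  # fluent chaining, as in the Java builder

        def build(self):
            return RowSignatureSketch(self._columns)

    @staticmethod
    def builder():
        return RowSignatureSketch.Builder()

    def column_names(self):
        return [name for name, _ in self._columns]


sig = (RowSignatureSketch.builder()
       .add("segment_id", "STRING")
       .add("num_rows", "LONG")
       .build())
```

The point of the pattern is that column order is fixed at build time, so row projection and the SQL type mapping stay in sync.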

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012202
 
 

 ##
 File path: server/src/main/java/org/apache/druid/client/TimelineServerView.java
 ##
 @@ -36,6 +38,9 @@
   @Nullable
   TimelineLookup getTimeline(DataSource dataSource);
 
+  @Nullable
 
 Review comment:
   Right, they throw UnsupportedOperationException instead, removed the 
nullable annotation.
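The change discussed here replaces a nullable return with a thrown UnsupportedOperationException. A hedged sketch of what that means for callers, using hypothetical names and Python's NotImplementedError in place of the Java exception:

```python
# Sketch of the contract change discussed above: instead of a @Nullable
# return, implementations that have no timeline raise an error.
# Names are illustrative, not Druid's actual API.
class ServerViewStub:
    def get_timeline(self, datasource):
        # Mirrors Java's UnsupportedOperationException for server views
        # that do not maintain a timeline.
        raise NotImplementedError("this server view has no timeline")


def timeline_or_none(view, datasource):
    # Caller-side handling once the nullable annotation is gone:
    # the exception, not a null return, signals "unsupported".
    try:
        return view.get_timeline(datasource)
    except NotImplementedError:
        return None
```
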





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012194
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,575 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012156
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,575 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012150
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,89 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|num_rows|Number of rows in current segment, this value could be null if unknown to broker at query time|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
 
 Review comment:
   changed





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-24 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r220012167
 
 

 ##
 File path: 
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,575 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657793
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers (any server that hosts a segment). It includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|scheme|Server scheme http or https|
+|server_type|Type of druid service for example historical, realtime, bridge, 
indexer_executor|
 
 Review comment:
   changed
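As a sketch of how the server_type values discussed in this thread might be validated (hypothetical helper; `bridge` is dropped later in the review, so it is excluded here):

```python
# Illustrative check against the server_type values settled on in this
# review thread. The set below is an assumption based on the discussion,
# not an authoritative list.
VALID_SERVER_TYPES = {"historical", "realtime", "indexer_executor"}


def is_known_server_type(server_type):
    return server_type.lower() in VALID_SERVER_TYPES
```
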





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657781
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/discovery/DruidLeaderClient.java
 ##
 @@ -133,6 +134,17 @@ public FullResponseHolder go(Request request) throws 
IOException, InterruptedExc
 return go(request, new FullResponseHandler(StandardCharsets.UTF_8));
   }
 
+  public <Intermediate, Final> ListenableFuture<Final> goStream(
+      final Request request,
+      final HttpResponseHandler<Intermediate, Final> handler
+  )
+  {
+    return httpClient.go(
 
 Review comment:
   fixed the format





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657795
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
 
 Review comment:
   fixed





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657794
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
+For example to retrieve all segments for datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+Segments table provides details on all Druid segments, whether they are 
published yet or not.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO8601 timestamp corresponding to when 
the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a 
datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|Jsonified datasegment payload|
+
+### SERVERS table
+Servers table lists all data servers(any server that hosts a segment). It 
includes both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
 
 Review comment:
   sure, added another column for `server_host`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657787
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -519,6 +524,89 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers and tasks.
 
 Review comment:
   fixed





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657778
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/server/http/MetadataResource.java
 ##
 @@ -136,6 +137,23 @@ public Response getDatabaseSegmentDataSource(
 return Response.status(Response.Status.OK).entity(dataSource).build();
   }
 
+  @GET
+  @Path("/segments")
+  @Produces(MediaType.APPLICATION_JSON)
+  @ResourceFilters(DatasourceResourceFilter.class)
+  public Response getDatabaseSegmentSegments()
 
 Review comment:
   yes, that looks better





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657790
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +486,88 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The SYS schema provides visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM sys.segments WHERE datasource = 'wikipedia'
+```
+
+### SEGMENTS table
+The segments table provides details on all Druid segments, whether or not they have been published.
+
+
+|Column|Notes|
+|--|-|
+|segment_id|Unique segment identifier|
+|datasource|Name of datasource|
+|start|Interval start time (in ISO 8601 format)|
+|end|Interval end time (in ISO 8601 format)|
+|size|Size of segment in bytes|
+|version|Version number (generally an ISO 8601 timestamp corresponding to when the segment set was first started)|
+|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|Number of replicas of this segment currently being served|
+|is_published|True if this segment has been published to the metadata store|
+|is_available|True if this segment is currently being served by any server|
+|is_realtime|True if this segment is being served on a realtime server|
+|payload|JSON-serialized DataSegment payload|
+
+### SERVERS table
+The servers table lists all data servers (any server that hosts a segment), including both historicals and peons.
+
+|Column|Notes|
+|--|-|
+|server|Server name in the form host:port|
+|scheme|Server scheme, either http or https|
+|server_type|Type of Druid service, for example historical, realtime, bridge, or indexer_executor|
+|tier|Distribution tier; see [druid.server.tier](../configuration/index.html#Historical-General-Configuration)|
+|current_size|Current size of segments in bytes on this server|
+|max_size|Maximum size in bytes of segments that this server recommends be assigned to it; see [druid.server.maxSize](../configuration/index.html#Historical-General-Configuration)|
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM sys.servers;
+```
+
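+For instance, to summarize per-tier capacity (an illustrative sketch; it
+assumes the size columns aggregate as numbers):
+```sql
+SELECT tier, SUM(current_size) AS used_bytes, SUM(max_size) AS total_bytes
+FROM sys.servers
+GROUP BY tier
+```
+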
+### SEGMENT_SERVERS table
+
+The segment_servers table can be used to join the segments table with the servers table.
+
+|Column|Notes|
+|--|-|
+|server|Server name in format host:port (Primary key of the [servers table](#SERVERS-table))|
+|segment_id|Segment identifier (Primary key of the [segments table](#SEGMENTS-table))|
+
+To retrieve information from the segment_servers table, use the query:
+```sql
+SELECT * FROM sys.segment_servers;
+```
+
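+As an illustration of such a join (a sketch based on the columns listed
+above), the following lists the servers hosting each segment of a datasource:
+```sql
+SELECT ss.segment_id, sv.server, sv.tier
+FROM sys.segment_servers ss
+JOIN sys.servers sv ON ss.server = sv.server
+JOIN sys.segments sg ON ss.segment_id = sg.segment_id
+WHERE sg.datasource = 'wikipedia'
+```
+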
+### TASKS table
+
+The tasks table provides information about active and recently-completed 
indexing tasks. For more information 
 
 Review comment:
   fixed





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-09-21 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r219657789
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -468,6 +468,11 @@ plan SQL queries. This metadata is cached on broker 
startup and also updated per
 [SegmentMetadata queries](segmentmetadataquery.html). Background metadata 
refreshing is triggered by
 segments entering and exiting the cluster, and can also be throttled through 
configuration.
 
+Druid exposes system information through special system tables. There are two such schemas available: Information Schema and System Schema.
 
 Review comment:
   added the missing period, let me know if you want to restructure it further.





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843449
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package io.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.inject.Inject;
+import io.druid.client.BrokerServerView;
+import io.druid.client.DruidServer;
+import io.druid.client.ImmutableDruidDataSource;
+import io.druid.client.coordinator.Coordinator;
+import io.druid.client.indexing.IndexingService;
+import io.druid.client.selector.QueryableDruidServer;
+import io.druid.discovery.DruidLeaderClient;
+import io.druid.indexer.TaskStatusPlus;
+import io.druid.java.util.common.ISE;
+import io.druid.java.util.common.StringUtils;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.java.util.http.client.response.FullResponseHolder;
+import io.druid.segment.column.ValueType;
+import io.druid.server.coordination.ServerType;
+import io.druid.server.security.AuthorizerMapper;
+import io.druid.sql.calcite.table.RowSignature;
+import io.druid.timeline.DataSegment;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponseStatus;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "SYS";
+  private static final String SEGMENTS_TABLE = "SEGMENTS";
+  private static final String SERVERS_TABLE = "SERVERS";
+  private static final String SERVERSEGMENTS_TABLE = "SEGMENTSERVERS";
+  private static final String TASKS_TABLE = "TASKS";
+  private static final int SEGMENTS_TABLE_SIZE;
+  private static final int SERVERSEGMENTS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SEGMENT_ID", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("START", ValueType.STRING)
+  .add("END", ValueType.STRING)
+  .add("IS_PUBLISHED", ValueType.STRING)
+  .add("IS_AVAILABLE", ValueType.STRING)
+  .add("IS_REALTIME", ValueType.STRING)
+  .add("PAYLOAD", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SERVER_TYPE", ValueType.STRING)
+  .add("TIER", ValueType.STRING)
+  .add("CURR_SIZE", ValueType.STRING)
+  .add("MAX_SIZE", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SEGMENT_ID", ValueType.STRING)
+  .build();
+
+  private static final RowSignature TASKS_SIGNATURE = RowSignature
+  .builder()
+  .add("TASK_ID", ValueType.STRING)
+  .add("TYPE", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("CREATED_TIME", ValueType.STRING)
+  .add("QUEUE_INSERTION_TIME", ValueType.STRING)
+  .add("STATUS", ValueType.STRING)
+  .add("RUNNER_STATUS", ValueType.STRING)
+  

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843455
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843443
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843356
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843435
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package io.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.inject.Inject;
+import io.druid.client.BrokerServerView;
+import io.druid.client.DruidServer;
+import io.druid.client.ImmutableDruidDataSource;
+import io.druid.client.coordinator.Coordinator;
+import io.druid.client.indexing.IndexingService;
+import io.druid.client.selector.QueryableDruidServer;
+import io.druid.discovery.DruidLeaderClient;
+import io.druid.indexer.TaskStatusPlus;
+import io.druid.java.util.common.ISE;
+import io.druid.java.util.common.StringUtils;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.java.util.http.client.response.FullResponseHolder;
+import io.druid.segment.column.ValueType;
+import io.druid.server.coordination.ServerType;
+import io.druid.server.security.AuthorizerMapper;
+import io.druid.sql.calcite.table.RowSignature;
+import io.druid.timeline.DataSegment;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponseStatus;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "SYS";
+  private static final String SEGMENTS_TABLE = "SEGMENTS";
+  private static final String SERVERS_TABLE = "SERVERS";
+  private static final String SERVERSEGMENTS_TABLE = "SEGMENTSERVERS";
+  private static final String TASKS_TABLE = "TASKS";
+  private static final int SEGMENTS_TABLE_SIZE;
+  private static final int SERVERSEGMENTS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SEGMENT_ID", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("START", ValueType.STRING)
+  .add("END", ValueType.STRING)
+  .add("IS_PUBLISHED", ValueType.STRING)
+  .add("IS_AVAILABLE", ValueType.STRING)
+  .add("IS_REALTIME", ValueType.STRING)
+  .add("PAYLOAD", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SERVER_TYPE", ValueType.STRING)
+  .add("TIER", ValueType.STRING)
+  .add("CURR_SIZE", ValueType.STRING)
+  .add("MAX_SIZE", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SEGMENT_ID", ValueType.STRING)
+  .build();
+
+  private static final RowSignature TASKS_SIGNATURE = RowSignature
+  .builder()
+  .add("TASK_ID", ValueType.STRING)
+  .add("TYPE", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("CREATED_TIME", ValueType.STRING)
+  .add("QUEUE_INSERTION_TIME", ValueType.STRING)
+  .add("STATUS", ValueType.STRING)
+  .add("RUNNER_STATUS", ValueType.STRING)
+  
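For context on the excerpt above: each `RowSignature` fixes the column order that the corresponding system table's rows must follow. A minimal, self-contained sketch (using a hypothetical `SegmentRecord` holder, not the PR's actual `DataSegment`/`ScannableTable` classes, and eliding the real payload serialization) of how segment metadata could be flattened into `Object[]` rows matching the SEGMENTS signature:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SegmentsRowSketch
{
  // Hypothetical stand-in for segment metadata; the PR uses io.druid.timeline.DataSegment.
  static final class SegmentRecord
  {
    final String id;
    final String dataSource;
    final String start;
    final String end;
    final boolean published;
    final boolean available;
    final boolean realtime;

    SegmentRecord(String id, String dataSource, String start, String end,
                  boolean published, boolean available, boolean realtime)
    {
      this.id = id;
      this.dataSource = dataSource;
      this.start = start;
      this.end = end;
      this.published = published;
      this.available = available;
      this.realtime = realtime;
    }
  }

  // Flatten records into Object[] rows in SEGMENTS_SIGNATURE column order:
  // SEGMENT_ID, DATASOURCE, START, END, IS_PUBLISHED, IS_AVAILABLE, IS_REALTIME, PAYLOAD.
  // All columns are STRING in the signature above, so booleans are stringified here.
  static List<Object[]> toRows(List<SegmentRecord> segments)
  {
    return segments.stream()
                   .map(s -> new Object[]{
                       s.id,
                       s.dataSource,
                       s.start,
                       s.end,
                       String.valueOf(s.published),
                       String.valueOf(s.available),
                       String.valueOf(s.realtime),
                       "{}"  // payload JSON elided in this sketch
                   })
                   .collect(Collectors.toList());
  }

  public static void main(String[] args)
  {
    List<SegmentRecord> input = Arrays.asList(
        new SegmentRecord("ds1_2018-08-01_v1", "ds1",
                          "2018-08-01", "2018-08-02", true, true, false));
    List<Object[]> rows = toRows(input);
    System.out.println(rows.get(0)[0] + " " + rows.get(0)[4]);  // prints "ds1_2018-08-01_v1 true"
  }
}
```

In Calcite terms, a list of such rows is what a `ScannableTable.scan(DataContext)` implementation would wrap in an `Enumerable` (e.g. via `Linq4j.asEnumerable`), with the `RowSignature` driving the `RelDataType` returned by `getRowType`.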

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843408
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843368
 
 


[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843376
 
 


[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843432
 
 


[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843422
 
 


[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843337
 
 


[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843210
 
 

+  private static final String TASKS_TABLE = "TASKS";
+  private static final int SEGMENTS_TABLE_SIZE;
+  private static final int SERVERSEGMENTS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SEGMENT_ID", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("START", ValueType.STRING)
+  .add("END", ValueType.STRING)
+  .add("IS_PUBLISHED", ValueType.STRING)
+  .add("IS_AVAILABLE", ValueType.STRING)
+  .add("IS_REALTIME", ValueType.STRING)
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843170
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -430,6 +430,10 @@ plan SQL queries. This metadata is cached on broker startup and also updated per
 [SegmentMetadata queries](segmentmetadataquery.html). Background metadata refreshing is triggered by segments entering and exiting the cluster, and can also be throttled through configuration.
 
+Druid exposes system information through special system tables. There are two such schemas available: the Information Schema and the System Schema.
 
 Review comment:
   fixed




[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843334
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SEGMENT_ID", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("START", ValueType.STRING)
+  .add("END", ValueType.STRING)
+  .add("IS_PUBLISHED", ValueType.STRING)
+  .add("IS_AVAILABLE", ValueType.STRING)
+  .add("IS_REALTIME", ValueType.STRING)
+  .add("PAYLOAD", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SERVER_TYPE", ValueType.STRING)
+  .add("TIER", ValueType.STRING)
+  .add("CURR_SIZE", ValueType.STRING)
+  .add("MAX_SIZE", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SEGMENT_ID", ValueType.STRING)
+  .build();
+
+  private static final RowSignature TASKS_SIGNATURE = RowSignature
+  .builder()
+  .add("TASK_ID", ValueType.STRING)
+  .add("TYPE", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("CREATED_TIME", ValueType.STRING)
+  .add("QUEUE_INSERTION_TIME", ValueType.STRING)
+  .add("STATUS", ValueType.STRING)
+  .add("RUNNER_STATUS", ValueType.STRING)
+  

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843327
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843183
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The system tables provide visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+select * from SYS.SEGMENTS where DATASOURCE='wikipedia';
+```
+
+### SEGMENTS table
+The SEGMENTS table provides details on all segments, both published and served (but not yet published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|True if the segment exists in the metadata store|
+|IS_AVAILABLE|True if the segment is currently being served|
+|IS_REALTIME|True if the segment is being served by a realtime server|
+|PAYLOAD|JSON-serialized DataSegment payload|
+
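+For example, the availability flags above can be combined in a WHERE clause. The following is an illustrative sketch (it assumes a datasource named "wikipedia" exists, and that the STRING-typed flag columns hold the values 'true'/'false'):
+```sql
+select SEGMENT_ID, DATASOURCE from SYS.SEGMENTS
+where DATASOURCE = 'wikipedia' and IS_PUBLISHED = 'true' and IS_AVAILABLE = 'false';
+```
+Such a query would surface segments that are published to the metadata store but not currently served by any server.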
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SERVER_TYPE||
+|TIER||
+|CURRENT_SIZE||
+|MAX_SIZE||
+
+To retrieve information about all servers, use the query:
+```sql
+select * from SYS.SERVERS;
+```
+
+### SEGMENTSERVERS table
+
+The SEGMENTSERVERS table can be used to join the SEGMENTS and SERVERS tables.
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SEGMENT_ID||
+
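+Because SEGMENTSERVERS shares the SERVER column with SERVERS and the SEGMENT_ID column with SEGMENTS, it acts as a bridge table. An illustrative join (a sketch, not verbatim syntax from this patch) listing the segments hosted by each server:
+```sql
+select servers.SERVER, servers.TIER, segmentservers.SEGMENT_ID
+from SYS.SERVERS servers
+join SYS.SEGMENTSERVERS segmentservers on servers.SERVER = segmentservers.SERVER;
+```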
+### TASKS table
+
+The TASKS table provides task information from the Overlord.
+
+|Column|Notes|
+|--|-|
+|TASK_ID||
 
 Review comment:
   yes added




[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843200
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SERVER_TYPE", ValueType.STRING)
+  .add("TIER", ValueType.STRING)
+  .add("CURR_SIZE", ValueType.STRING)
 
 Review comment:
   done




[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843186
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/planner/Calcites.java
 ##
 @@ -98,11 +103,30 @@ public static Charset defaultCharset()
 return DEFAULT_CHARSET;
   }
 
-  public static SchemaPlus createRootSchema(final Schema druidSchema, final 
AuthorizerMapper authorizerMapper)
+  public static SchemaPlus createRootSchema(
+  final TimelineServerView serverView,
+  final Schema druidSchema,
+  final AuthorizerMapper authorizerMapper,
+  final DruidLeaderClient coordinatorDruidLeaderClient,
+  final DruidLeaderClient overlordDruidLeaderClient,
+  final ObjectMapper jsonMapper
+  )
   {
 final SchemaPlus rootSchema = CalciteSchema.createRootSchema(false, 
false).plus();
 rootSchema.add(DruidSchema.NAME, druidSchema);
 rootSchema.add(InformationSchema.NAME, new InformationSchema(rootSchema, 
authorizerMapper));
+if (serverView instanceof BrokerServerView) {
 
 Review comment:
it doesn't seem necessary, removed the cast.




[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843307
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843283
 
 

 ##
 File path: sql/src/main/java/io/druid/sql/calcite/schema/SystemSchema.java
 ##
 @@ -0,0 +1,435 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package io.druid.sql.calcite.schema;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import com.google.common.collect.ImmutableMap;
+import com.google.inject.Inject;
+import io.druid.client.BrokerServerView;
+import io.druid.client.DruidServer;
+import io.druid.client.ImmutableDruidDataSource;
+import io.druid.client.coordinator.Coordinator;
+import io.druid.client.indexing.IndexingService;
+import io.druid.client.selector.QueryableDruidServer;
+import io.druid.discovery.DruidLeaderClient;
+import io.druid.indexer.TaskStatusPlus;
+import io.druid.java.util.common.ISE;
+import io.druid.java.util.common.StringUtils;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.java.util.http.client.response.FullResponseHolder;
+import io.druid.segment.column.ValueType;
+import io.druid.server.coordination.ServerType;
+import io.druid.server.security.AuthorizerMapper;
+import io.druid.sql.calcite.table.RowSignature;
+import io.druid.timeline.DataSegment;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.linq4j.Enumerable;
+import org.apache.calcite.linq4j.Linq4j;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.ScannableTable;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.jboss.netty.handler.codec.http.HttpMethod;
+import org.jboss.netty.handler.codec.http.HttpResponseStatus;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class SystemSchema extends AbstractSchema
+{
+  private static final Logger log = new Logger(SystemSchema.class);
+
+  public static final String NAME = "SYS";
+  private static final String SEGMENTS_TABLE = "SEGMENTS";
+  private static final String SERVERS_TABLE = "SERVERS";
+  private static final String SERVERSEGMENTS_TABLE = "SEGMENTSERVERS";
+  private static final String TASKS_TABLE = "TASKS";
+  private static final int SEGMENTS_TABLE_SIZE;
+  private static final int SERVERSEGMENTS_TABLE_SIZE;
+
+  private static final RowSignature SEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SEGMENT_ID", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("START", ValueType.STRING)
+  .add("END", ValueType.STRING)
+  .add("IS_PUBLISHED", ValueType.STRING)
+  .add("IS_AVAILABLE", ValueType.STRING)
+  .add("IS_REALTIME", ValueType.STRING)
+  .add("PAYLOAD", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SERVER_TYPE", ValueType.STRING)
+  .add("TIER", ValueType.STRING)
+  .add("CURR_SIZE", ValueType.STRING)
+  .add("MAX_SIZE", ValueType.STRING)
+  .build();
+
+  private static final RowSignature SERVERSEGMENTS_SIGNATURE = RowSignature
+  .builder()
+  .add("SERVER", ValueType.STRING)
+  .add("SEGMENT_ID", ValueType.STRING)
+  .build();
+
+  private static final RowSignature TASKS_SIGNATURE = RowSignature
+  .builder()
+  .add("TASK_ID", ValueType.STRING)
+  .add("TYPE", ValueType.STRING)
+  .add("DATASOURCE", ValueType.STRING)
+  .add("CREATED_TIME", ValueType.STRING)
+  .add("QUEUE_INSERTION_TIME", ValueType.STRING)
+  .add("STATUS", ValueType.STRING)
+  .add("RUNNER_STATUS", ValueType.STRING)
+  

[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843167
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The system tables provide visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM SYS.SEGMENTS WHERE DATASOURCE = 'wikipedia';
+```
+
+### SEGMENTS table
+The SEGMENTS table provides details on all segments, both published and served (but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|true if this segment has been published to the metadata store|
+|IS_AVAILABLE|true if this segment is currently being served by any server|
+|IS_REALTIME|true if this segment is being served on a realtime server|
+|PAYLOAD|JSON-serialized DataSegment payload|
+
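As a usage sketch (assuming, per the STRING column types defined in this patch, that the boolean flags are rendered as the strings 'true'/'false'), available segments can be counted per datasource:

```sql
-- Count served segments per datasource; the 'true' literal is an
-- assumption based on the STRING column types defined in this PR.
SELECT DATASOURCE, COUNT(*) AS available_segments
FROM SYS.SEGMENTS
WHERE IS_AVAILABLE = 'true'
GROUP BY DATASOURCE;
```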
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SERVER_TYPE||
+|TIER||
+|CURRENT_SIZE||
+|MAX_SIZE||
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM SYS.SERVERS;
+```
+
+### SEGMENTSERVERS table
+
+The SEGMENTSERVERS table is used to join the SEGMENTS table with the SERVERS table.
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SEGMENT_ID||
+
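Since SEGMENTSERVERS shares SERVER with the SERVERS table and SEGMENT_ID with the SEGMENTS table, a join across all three might look like the following sketch (the '_default_tier' value is illustrative, not part of this patch):

```sql
-- List which server hosts which segment, restricted to one tier.
SELECT ss.SERVER, seg.DATASOURCE, seg.SEGMENT_ID
FROM SYS.SEGMENTSERVERS ss
JOIN SYS.SEGMENTS seg ON ss.SEGMENT_ID = seg.SEGMENT_ID
JOIN SYS.SERVERS srv ON ss.SERVER = srv.SERVER
WHERE srv.TIER = '_default_tier';
```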
+### TASKS table
+
+The TASKS table provides task information from the Overlord.
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843134
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The system tables provide visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM SYS.SEGMENTS WHERE DATASOURCE = 'wikipedia';
+```
+
+### SEGMENTS table
+The SEGMENTS table provides details on all segments, both published and served (but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|true if this segment has been published to the metadata store|
+|IS_AVAILABLE|true if this segment is currently being served by any server|
+|IS_REALTIME|true if this segment is being served on a realtime server|
+|PAYLOAD|JSON-serialized DataSegment payload|
+
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
 
 Review comment:
   added description





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843156
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The system tables provide visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM SYS.SEGMENTS WHERE DATASOURCE = 'wikipedia';
+```
+
+### SEGMENTS table
+The SEGMENTS table provides details on all segments, both published and served (but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|true if this segment has been published to the metadata store|
+|IS_AVAILABLE|true if this segment is currently being served by any server|
+|IS_REALTIME|true if this segment is being served on a realtime server|
+|PAYLOAD|JSON-serialized DataSegment payload|
+
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SERVER_TYPE||
+|TIER||
+|CURRENT_SIZE||
+|MAX_SIZE||
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM SYS.SERVERS;
+```
+
+### SEGMENTSERVERS table
+
+The SEGMENTSERVERS table is used to join the SEGMENTS table with the SERVERS table.
+
+|Column|Notes|
+|--|-|
+|SERVER||
 
 Review comment:
   done





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843154
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The system tables provide visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM SYS.SEGMENTS WHERE DATASOURCE = 'wikipedia';
+```
+
+### SEGMENTS table
+The SEGMENTS table provides details on all segments, both published and served (but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|true if this segment has been published to the metadata store|
+|IS_AVAILABLE|true if this segment is currently being served by any server|
+|IS_REALTIME|true if this segment is being served on a realtime server|
+|PAYLOAD|JSON-serialized DataSegment payload|
+
+### SERVERS table
+
+
+|Column|Notes|
+|--|-|
+|SERVER||
+|SERVER_TYPE||
+|TIER||
+|CURRENT_SIZE||
+|MAX_SIZE||
+
+To retrieve information about all servers, use the query:
+```sql
+SELECT * FROM SYS.SERVERS;
+```
+
+### SEGMENTSERVERS table
+
+The SEGMENTSERVERS table is used to join the SEGMENTS table with the SERVERS table.
 
 Review comment:
   changed to `segment_servers` lowercase





[GitHub] surekhasaharan commented on a change in pull request #6094: Introduce SystemSchema tables (#5989)

2018-08-22 Thread GitBox
surekhasaharan commented on a change in pull request #6094: Introduce 
SystemSchema tables (#5989)
URL: https://github.com/apache/incubator-druid/pull/6094#discussion_r211843117
 
 

 ##
 File path: docs/content/querying/sql.md
 ##
 @@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE 
TABLE_SCHEMA = 'druid' AND TABLE_
 |COLLATION_NAME||
 |JDBC_TYPE|Type code from java.sql.Types (Druid extension)|
 
+## SYSTEM SCHEMA
+
+The system tables provide visibility into Druid segments, servers, and tasks.
+For example, to retrieve all segments for the datasource "wikipedia", use the query:
+```sql
+SELECT * FROM SYS.SEGMENTS WHERE DATASOURCE = 'wikipedia';
+```
+
+### SEGMENTS table
+The SEGMENTS table provides details on all segments, both published and served (but not published).
+
+
+|Column|Notes|
+|--|-|
+|SEGMENT_ID||
+|DATASOURCE||
+|START||
+|END||
+|IS_PUBLISHED|true if this segment has been published to the metadata store|
 
 Review comment:
   added more description




