This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/iceberg-docs.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 21408718 deploy: 6444028957a98a033ff7337e19d3d1f0402fe401
21408718 is described below

commit 214087185102b3a05f23b7d144697b8a7d7f16c1
Author: danielcweeks <[email protected]>
AuthorDate: Wed Mar 22 15:19:40 2023 +0000

    deploy: 6444028957a98a033ff7337e19d3d1f0402fe401
---
 common/index.xml            |  2 +-
 getting-started/index.html  | 20 +-------------------
 index.xml                   |  2 +-
 spark-quickstart/index.html | 25 +++++++++++++++++++------
 4 files changed, 22 insertions(+), 27 deletions(-)

diff --git a/common/index.xml b/common/index.xml
index c94579f1..ce1800d1 100644
--- a/common/index.xml
+++ b/common/index.xml
@@ -1,5 +1,5 @@
 <?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" 
xmlns:atom="http://www.w3.org/2005/Atom";><channel><title>Commons on Apache 
Iceberg</title><link>https://iceberg.apache.org/common/</link><description>Recent
 content in Commons on Apache Iceberg</description><generator>Hugo -- 
gohugo.io</generator><language>en-us</language><atom:link 
href="https://iceberg.apache.org/common/index.xml"; rel="self" 
type="application/rss+xml"/><item><title>Spark and Iceberg Quickstart</t [...]
-Docker-Compose Creating a table Writing Data to a Table Reading Data from a 
Table Adding A Catalog Next Steps Docker-Compose The fastest way to get started 
is to use a docker-compose file that uses the the tabulario/spark-iceberg image 
which contains a local Spark cluster with a configured Iceberg 
catalog.</description></item><item><title>Releases</title><link>https://iceberg.apache.org/releases/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/rel [...]
+Docker-Compose Creating a table Writing Data to a Table Reading Data from a 
Table Adding A Catalog Next Steps Docker-Compose The fastest way to get started 
is to use a docker-compose file that uses the tabulario/spark-iceberg image 
which contains a local Spark cluster with a configured Iceberg 
catalog.</description></item><item><title>Releases</title><link>https://iceberg.apache.org/releases/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/release 
[...]
 1.1.0 source tar.gz &amp;ndash; signature &amp;ndash; sha512 1.1.0 Spark 
3.3_2.12 runtime Jar &amp;ndash; 3.3_2.13 1.1.0 Spark 3.2_2.12 runtime Jar 
&amp;ndash; 3.2_2.13 1.1.0 Spark 3.1 runtime Jar 1.1.0 Spark 2.4 runtime Jar 
1.1.0 Flink 1.16 runtime Jar 1.1.0 Flink 1.15 runtime Jar 1.1.0 Flink 1.14 
runtime Jar 1.1.0 Hive runtime Jar To use Iceberg in Spark or Flink, download 
the runtime JAR for your engine version and add it to the jars folder of your 
installation.</description></item><i [...]
 Running Benchmarks on GitHub It is possible to run one or more Benchmarks via 
the JMH Benchmarks GH action on your own fork of the Iceberg 
repo.</description></item><item><title>Blogs</title><link>https://iceberg.apache.org/blogs/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 
+0000</pubDate><guid>https://iceberg.apache.org/blogs/</guid><description>Iceberg
 Blogs Here is a list of company blogs that talk about Iceberg. The blogs are 
ordered from most recent to oldest.
 Understanding Iceberg Table Metadata Date: January 30st, 2023, Company: 
Snowflake
diff --git a/getting-started/index.html b/getting-started/index.html
index b3c6e0d8..52cc0335 100644
--- a/getting-started/index.html
+++ b/getting-started/index.html
@@ -1,19 +1 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-<head>
-  <meta http-equiv="Refresh" content="0; url='/docs/latest/getting-started'" />
-</head>
+<!doctype html><html 
lang=en-us><head><title>https://iceberg.apache.org/spark-quickstart/</title><link
 rel=canonical href=https://iceberg.apache.org/spark-quickstart/><meta 
name=robots content="noindex"><meta charset=utf-8><meta http-equiv=refresh 
content="0; url=https://iceberg.apache.org/spark-quickstart/";></head></html>
\ No newline at end of file
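Note: the replacement getting-started/index.html above is the stub that Hugo emits for a page alias; it simply redirects visitors to /spark-quickstart/. A hypothetical sketch of the front matter on the spark-quickstart page that would generate such a stub (not taken from this commit):

---
# Hypothetical front matter; each entry under aliases makes Hugo emit a small
# client-side redirect page like the HTML added above.
title: "Spark and Iceberg Quickstart"
aliases:
  - /getting-started/
---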
diff --git a/index.xml b/index.xml
index 2776d8f2..56ae7e2a 100644
--- a/index.xml
+++ b/index.xml
@@ -1,5 +1,5 @@
 <?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" 
xmlns:atom="http://www.w3.org/2005/Atom";><channel><title>Apache 
Iceberg</title><link>https://iceberg.apache.org/</link><description>Recent 
content on Apache Iceberg</description><generator>Hugo -- 
gohugo.io</generator><language>en-us</language><atom:link 
href="https://iceberg.apache.org/index.xml"; rel="self" 
type="application/rss+xml"/><item><title>Expressive 
SQL</title><link>https://iceberg.apache.org/services/exp [...]
-Docker-Compose Creating a table Writing Data to a Table Reading Data from a 
Table Adding A Catalog Next Steps Docker-Compose The fastest way to get started 
is to use a docker-compose file that uses the the tabulario/spark-iceberg image 
which contains a local Spark cluster with a configured Iceberg 
catalog.</description></item><item><title>Releases</title><link>https://iceberg.apache.org/releases/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/rel [...]
+Docker-Compose Creating a table Writing Data to a Table Reading Data from a 
Table Adding A Catalog Next Steps Docker-Compose The fastest way to get started 
is to use a docker-compose file that uses the tabulario/spark-iceberg image 
which contains a local Spark cluster with a configured Iceberg 
catalog.</description></item><item><title>Releases</title><link>https://iceberg.apache.org/releases/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/release 
[...]
 1.1.0 source tar.gz &amp;ndash; signature &amp;ndash; sha512 1.1.0 Spark 
3.3_2.12 runtime Jar &amp;ndash; 3.3_2.13 1.1.0 Spark 3.2_2.12 runtime Jar 
&amp;ndash; 3.2_2.13 1.1.0 Spark 3.1 runtime Jar 1.1.0 Spark 2.4 runtime Jar 
1.1.0 Flink 1.16 runtime Jar 1.1.0 Flink 1.15 runtime Jar 1.1.0 Flink 1.14 
runtime Jar 1.1.0 Hive runtime Jar To use Iceberg in Spark or Flink, download 
the runtime JAR for your engine version and add it to the jars folder of your 
installation.</description></item><i [...]
 Running Benchmarks on GitHub It is possible to run one or more Benchmarks via 
the JMH Benchmarks GH action on your own fork of the Iceberg 
repo.</description></item><item><title>Blogs</title><link>https://iceberg.apache.org/blogs/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 
+0000</pubDate><guid>https://iceberg.apache.org/blogs/</guid><description>Iceberg
 Blogs Here is a list of company blogs that talk about Iceberg. The blogs are 
ordered from most recent to oldest.
 Understanding Iceberg Table Metadata Date: January 30st, 2023, Company: 
Snowflake
diff --git a/spark-quickstart/index.html b/spark-quickstart/index.html
index 0c4c1137..40908f12 100644
--- a/spark-quickstart/index.html
+++ b/spark-quickstart/index.html
@@ -4,7 +4,7 @@
 <span class=icon-bar></span>
 <span class=icon-bar></span></button>
 <a class="page-scroll navbar-brand" href=https://iceberg.apache.org/><img 
class=top-navbar-logo 
src=https://iceberg.apache.org//img/iceberg-logo-icon.png> Apache 
Iceberg</a></div><div><input type=search class=form-control id=search-input 
placeholder=Search... maxlength=64 data-hotkeys=s/></div><div 
class=versions-dropdown><span>1.1.0</span> <i class="fa 
fa-chevron-down"></i><div class=versions-dropdown-content><ul><li 
class=versions-dropdown-selection><a href=/docs/latest>latest</a></li> [...]
-highlight some powerful features. You can learn more about Iceberg&rsquo;s 
Spark runtime by checking out the <a href=../docs/latest/spark-ddl/>Spark</a> 
section.</p><ul><li><a href=#docker-compose>Docker-Compose</a></li><li><a 
href=#creating-a-table>Creating a table</a></li><li><a 
href=#writing-data-to-a-table>Writing Data to a Table</a></li><li><a 
href=#reading-data-from-a-table>Reading Data from a Table</a></li><li><a 
href=#adding-a-catalog>Adding A Catalog</a></li><li><a href=#next-st [...]
+highlight some powerful features. You can learn more about Iceberg&rsquo;s 
Spark runtime by checking out the <a href=../docs/latest/spark-ddl/>Spark</a> 
section.</p><ul><li><a href=#docker-compose>Docker-Compose</a></li><li><a 
href=#creating-a-table>Creating a table</a></li><li><a 
href=#writing-data-to-a-table>Writing Data to a Table</a></li><li><a 
href=#reading-data-from-a-table>Reading Data from a Table</a></li><li><a 
href=#adding-a-catalog>Adding A Catalog</a></li><li><a href=#next-st [...]
 which contains a local Spark cluster with a configured Iceberg catalog. To use 
this, you&rsquo;ll need to install the <a 
href=https://docs.docker.com/get-docker/>Docker CLI</a> as well as the <a 
href=https://github.com/docker/compose-cli/blob/main/INSTALL.md>Docker Compose 
CLI</a>.</p><p>Once you have those, save the yaml below into a file named 
<code>docker-compose.yml</code>:</p><div class=highlight><pre tabindex=0 
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-siz [...]
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#f92672>services</span>:
@@ -12,6 +12,8 @@ which contains a local Spark cluster with a configured Iceberg catalog. To use t
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>image</span>: <span 
style=color:#ae81ff>tabulario/spark-iceberg</span>
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>container_name</span>: <span 
style=color:#ae81ff>spark-iceberg</span>
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>build</span>: <span style=color:#ae81ff>spark/</span>
+</span></span><span style=display:flex><span>    <span 
style=color:#f92672>networks</span>:
+</span></span><span style=display:flex><span>      <span 
style=color:#f92672>iceberg_net</span>:
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>depends_on</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>rest</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>minio</span>
@@ -25,18 +27,20 @@ which contains a local Spark cluster with a configured Iceberg catalog. To use t
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>ports</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>8888</span>:<span style=color:#ae81ff>8888</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>8080</span>:<span style=color:#ae81ff>8080</span>
-</span></span><span style=display:flex><span>    <span 
style=color:#f92672>links</span>:
-</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>rest:rest</span>
-</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>minio:minio</span>
+</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>10000</span>:<span style=color:#ae81ff>10000</span>
+</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>10001</span>:<span style=color:#ae81ff>10001</span>
 </span></span><span style=display:flex><span>  <span 
style=color:#f92672>rest</span>:
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>image</span>: <span 
style=color:#ae81ff>tabulario/iceberg-rest</span>
+</span></span><span style=display:flex><span>    <span 
style=color:#f92672>container_name</span>: <span 
style=color:#ae81ff>iceberg-rest</span>
+</span></span><span style=display:flex><span>    <span 
style=color:#f92672>networks</span>:
+</span></span><span style=display:flex><span>      <span 
style=color:#f92672>iceberg_net</span>:
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>ports</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>8181</span>:<span style=color:#ae81ff>8181</span>
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>environment</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>AWS_ACCESS_KEY_ID=admin</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>AWS_SECRET_ACCESS_KEY=password</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>AWS_REGION=us-east-1</span>
-</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>CATALOG_WAREHOUSE=s3a://warehouse/wh/</span>
+</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>CATALOG_WAREHOUSE=s3://warehouse/</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>CATALOG_IO__IMPL=org.apache.iceberg.aws.s3.S3FileIO</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>CATALOG_S3_ENDPOINT=http://minio:9000</span>
 </span></span><span style=display:flex><span>  <span 
style=color:#f92672>minio</span>:
@@ -45,6 +49,11 @@ which contains a local Spark cluster with a configured Iceberg catalog. To use t
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>environment</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>MINIO_ROOT_USER=admin</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>MINIO_ROOT_PASSWORD=password</span>
+</span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>MINIO_DOMAIN=minio</span>
+</span></span><span style=display:flex><span>    <span 
style=color:#f92672>networks</span>:
+</span></span><span style=display:flex><span>      <span 
style=color:#f92672>iceberg_net</span>:
+</span></span><span style=display:flex><span>        <span 
style=color:#f92672>aliases</span>:
+</span></span><span style=display:flex><span>          - <span 
style=color:#ae81ff>warehouse.minio</span>
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>ports</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>9001</span>:<span style=color:#ae81ff>9001</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>9000</span>:<span style=color:#ae81ff>9000</span>
@@ -54,6 +63,8 @@ which contains a local Spark cluster with a configured Iceberg catalog. To use t
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>minio</span>
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>image</span>: <span style=color:#ae81ff>minio/mc</span>
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>container_name</span>: <span style=color:#ae81ff>mc</span>
+</span></span><span style=display:flex><span>    <span 
style=color:#f92672>networks</span>:
+</span></span><span style=display:flex><span>      <span 
style=color:#f92672>iceberg_net</span>:
 </span></span><span style=display:flex><span>    <span 
style=color:#f92672>environment</span>:
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>AWS_ACCESS_KEY_ID=admin</span>
 </span></span><span style=display:flex><span>      - <span 
style=color:#ae81ff>AWS_SECRET_ACCESS_KEY=password</span>
@@ -64,8 +75,10 @@ which contains a local Spark cluster with a configured Iceberg catalog. To use t
 </span></span></span><span style=display:flex><span><span style=color:#e6db74> 
     /usr/bin/mc rm -r --force minio/warehouse;
 </span></span></span><span style=display:flex><span><span style=color:#e6db74> 
     /usr/bin/mc mb minio/warehouse;
 </span></span></span><span style=display:flex><span><span style=color:#e6db74> 
     /usr/bin/mc policy set public minio/warehouse;
-</span></span></span><span style=display:flex><span><span style=color:#e6db74> 
     exit 0;
+</span></span></span><span style=display:flex><span><span style=color:#e6db74> 
     tail -f /dev/null
 </span></span></span><span style=display:flex><span><span style=color:#e6db74> 
     &#34;</span>      
+</span></span><span style=display:flex><span><span 
style=color:#f92672>networks</span>:
+</span></span><span style=display:flex><span>  <span 
style=color:#f92672>iceberg_net</span>:
 </span></span></code></pre></div><p>Next, start up the docker containers with 
this command:</p><div class=highlight><pre tabindex=0 
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
 class=language-sh data-lang=sh><span style=display:flex><span>docker-compose up
 </span></span></code></pre></div><p>You can then run any of the following 
commands to start a Spark session.</p><div class=codetabs><input id=spark-sql 
type=radio name=LaunchSparkClient 
onclick='selectExampleLanguage("spark-queries","spark-sql")'>
 <label for=spark-sql>SparkSQL</label>
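
Pieced together from the spark-quickstart hunks above, the updated docker-compose.yml looks roughly like this. Treat it as a partial sketch only: stretches elided by the mail formatter ([...]) are omitted, the spark-iceberg and mc service keys and the mc depends_on key are assumed from the surrounding context rather than visible in the patch, and the minio image line and the start of the mc entrypoint script are not shown in the diff.

services:
  spark-iceberg:                      # service key assumed from container_name
    image: tabulario/spark-iceberg
    container_name: spark-iceberg
    build: spark/
    networks:
      iceberg_net:                    # added by this commit
    depends_on:
      - rest
      - minio
    ports:
      - 8888:8888
      - 8080:8080
      - 10000:10000                   # added; replaces the removed links: entries
      - 10001:10001
  rest:
    image: tabulario/iceberg-rest
    container_name: iceberg-rest      # added by this commit
    networks:
      iceberg_net:                    # added
    ports:
      - 8181:8181
    environment:
      - AWS_ACCESS_KEY_ID=admin
      - AWS_SECRET_ACCESS_KEY=password
      - AWS_REGION=us-east-1
      - CATALOG_WAREHOUSE=s3://warehouse/      # was s3a://warehouse/wh/
      - CATALOG_IO__IMPL=org.apache.iceberg.aws.s3.S3FileIO
      - CATALOG_S3_ENDPOINT=http://minio:9000
  minio:
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=password
      - MINIO_DOMAIN=minio            # added by this commit
    networks:
      iceberg_net:
        aliases:
          - warehouse.minio           # added; gives the bucket a resolvable host name
    ports:
      - 9001:9001
      - 9000:9000
  mc:                                 # service key assumed from container_name
    depends_on:                       # key not visible in the patch; assumed
      - minio
    image: minio/mc
    container_name: mc
    networks:
      iceberg_net:                    # added
    environment:
      - AWS_ACCESS_KEY_ID=admin
      - AWS_SECRET_ACCESS_KEY=password
networks:                             # added; shared top-level network
  iceberg_net:

In short, every service now joins a shared iceberg_net network declared at the top level, the Spark container publishes ports 10000 and 10001 instead of relying on links:, the REST catalog warehouse moves from s3a://warehouse/wh/ to s3://warehouse/, MinIO gains MINIO_DOMAIN=minio plus a warehouse.minio network alias (presumably to support virtual-host-style bucket addressing), and the mc entrypoint ends with tail -f /dev/null instead of exit 0 so that container keeps running. After docker-compose up, a Spark session is started inside the Spark container, for example with docker exec -it spark-iceberg spark-sql (command assumed from the container name; the launch tabs themselves are cut off above).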
