This is an automated email from the ASF dual-hosted git repository.

comphead pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/datafusion.git


The following commit(s) were added to refs/heads/main by this push:
     new 5d44685794 Doc: Modify docs to fix old naming (#10199)
5d44685794 is described below

commit 5d4468579481b36e53333c62bfd2440af1de1155
Author: comphead <[email protected]>
AuthorDate: Tue Apr 23 12:12:57 2024 -0700

    Doc: Modify docs to fix old naming (#10199)
    
    * fix docs on datafusion names
    
    * fix code
    
    * fmt
    
    * Update dev/release/README.md
    
    Co-authored-by: Andy Grove <[email protected]>
    
    * Update dev/release/README.md
    
    Co-authored-by: Andy Grove <[email protected]>
    
    * Update dev/release/README.md
    
    Co-authored-by: Andy Grove <[email protected]>
    
    ---------
    
    Co-authored-by: Andy Grove <[email protected]>
---
 datafusion-cli/src/catalog.rs                                  |  5 ++---
 datafusion-examples/README.md                                  |  2 +-
 datafusion-examples/examples/flight/flight_sql_server.rs       |  2 +-
 datafusion/sqllogictest/README.md                              |  2 +-
 .../sqllogictest/src/engines/datafusion_engine/normalize.rs    | 10 +++++-----
 dev/release/README.md                                          |  8 ++++----
 docs/source/contributor-guide/communication.md                 |  2 +-
 docs/source/user-guide/example-usage.md                        |  2 +-
 docs/source/user-guide/faq.md                                  |  2 +-
 docs/source/user-guide/introduction.md                         |  4 ++--
 10 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/datafusion-cli/src/catalog.rs b/datafusion-cli/src/catalog.rs
index 0fbb7a5908..faa657da65 100644
--- a/datafusion-cli/src/catalog.rs
+++ b/datafusion-cli/src/catalog.rs
@@ -345,10 +345,9 @@ mod tests {
             if cfg!(windows) { "USERPROFILE" } else { "HOME" },
             test_home_path,
         );
-        let input =
-            "~/Code/arrow-datafusion/benchmarks/data/tpch_sf1/part/part-0.parquet";
+        let input = "~/Code/datafusion/benchmarks/data/tpch_sf1/part/part-0.parquet";
         let expected = format!(
-            "{}{}Code{}arrow-datafusion{}benchmarks{}data{}tpch_sf1{}part{}part-0.parquet",
+            "{}{}Code{}datafusion{}benchmarks{}data{}tpch_sf1{}part{}part-0.parquet",
             test_home_path,
             MAIN_SEPARATOR,
             MAIN_SEPARATOR,
diff --git a/datafusion-examples/README.md b/datafusion-examples/README.md
index 5c596d1cda..4b0e64ebdb 100644
--- a/datafusion-examples/README.md
+++ b/datafusion-examples/README.md
@@ -31,7 +31,7 @@ To run the examples, use the `cargo run` command, such as:
 
 ```bash
 git clone https://github.com/apache/datafusion
-cd arrow-datafusion
+cd datafusion
 # Download test data
 git submodule update --init
 
diff --git a/datafusion-examples/examples/flight/flight_sql_server.rs b/datafusion-examples/examples/flight/flight_sql_server.rs
index ed9457643b..f04a559d00 100644
--- a/datafusion-examples/examples/flight/flight_sql_server.rs
+++ b/datafusion-examples/examples/flight/flight_sql_server.rs
@@ -73,7 +73,7 @@ macro_rules! status {
 ///
 /// JDBC connection string: "jdbc:arrow-flight-sql://127.0.0.1:50051/"
 ///
-/// Based heavily on Ballista's implementation: https://github.com/apache/arrow-ballista/blob/main/ballista/scheduler/src/flight_sql.rs
+/// Based heavily on Ballista's implementation: https://github.com/apache/datafusion-ballista/blob/main/ballista/scheduler/src/flight_sql.rs
 /// and the example in arrow-rs: https://github.com/apache/arrow-rs/blob/master/arrow-flight/examples/flight_sql_server.rs
 ///
 #[tokio::main]
diff --git a/datafusion/sqllogictest/README.md b/datafusion/sqllogictest/README.md
index 5a900cb994..930df47967 100644
--- a/datafusion/sqllogictest/README.md
+++ b/datafusion/sqllogictest/README.md
@@ -225,7 +225,7 @@ query <type_string> <sort_mode>
 <expected_result>
 ```
 
-- `test_name`: Uniquely identify the test name (arrow-datafusion only)
+- `test_name`: Uniquely identify the test name (DataFusion only)
 - `type_string`: A short string that specifies the number of result columns and the expected datatype of each result
   column. There is one character in the <type_string> for each result column. The characters codes are:
   - 'B' - **B**oolean,
diff --git a/datafusion/sqllogictest/src/engines/datafusion_engine/normalize.rs b/datafusion/sqllogictest/src/engines/datafusion_engine/normalize.rs
index 7eef1f020f..520b6b53b3 100644
--- a/datafusion/sqllogictest/src/engines/datafusion_engine/normalize.rs
+++ b/datafusion/sqllogictest/src/engines/datafusion_engine/normalize.rs
@@ -142,21 +142,21 @@ fn normalize_paths(mut row: Vec<String>) -> Vec<String> {
 fn workspace_root() -> &'static object_store::path::Path {
     static WORKSPACE_ROOT_LOCK: OnceLock<object_store::path::Path> = OnceLock::new();
     WORKSPACE_ROOT_LOCK.get_or_init(|| {
-        // e.g. /Software/arrow-datafusion/datafusion/core
+        // e.g. /Software/datafusion/datafusion/core
         let dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
 
-        // e.g. /Software/arrow-datafusion/datafusion
+        // e.g. /Software/datafusion/datafusion
         let workspace_root = dir
             .parent()
             .expect("Can not find parent of datafusion/core")
-            // e.g. /Software/arrow-datafusion
+            // e.g. /Software/datafusion
             .parent()
             .expect("parent of datafusion")
             .to_string_lossy();
 
         let sanitized_workplace_root = if cfg!(windows) {
-            // Object store paths are delimited with `/`, e.g. `D:/a/arrow-datafusion/arrow-datafusion/testing/data/csv/aggregate_test_100.csv`.
-            // The default windows delimiter is `\`, so the workplace path is `D:\a\arrow-datafusion\arrow-datafusion`.
+            // Object store paths are delimited with `/`, e.g. `/datafusion/datafusion/testing/data/csv/aggregate_test_100.csv`.
+            // The default windows delimiter is `\`, so the workplace path is `datafusion\datafusion`.
             workspace_root
                 .replace(std::path::MAIN_SEPARATOR, object_store::path::DELIMITER)
         } else {
diff --git a/dev/release/README.md b/dev/release/README.md
index c1701062ca..32735588ed 100644
--- a/dev/release/README.md
+++ b/dev/release/README.md
@@ -223,7 +223,7 @@ Here is my vote:
 +1
 
 [1]: https://github.com/apache/datafusion/tree/a5dd428f57e62db20a945e8b1895de91405958c4
-[2]: https://dist.apache.org/repos/dist/dev/arrow/apache-arrow-datafusion-5.1.0
+[2]: https://dist.apache.org/repos/dist/dev/arrow/apache-datafusion-5.1.0
 [3]: https://github.com/apache/datafusion/blob/a5dd428f57e62db20a945e8b1895de91405958c4/CHANGELOG.md
 ```
 
@@ -249,7 +249,7 @@ NOTE: steps in this section can only be done by PMC members.
 ### After the release is approved
 
 Move artifacts to the release location in SVN, e.g.
-https://dist.apache.org/repos/dist/release/arrow/arrow-datafusion-5.1.0/, using
+https://dist.apache.org/repos/dist/release/datafusion/datafusion-5.1.0/, using
 the `release-tarball.sh` script:
 
 ```shell
@@ -437,7 +437,7 @@ svn ls https://dist.apache.org/repos/dist/dev/arrow | grep datafusion
 Delete a release candidate:
 
 ```bash
-svn delete -m "delete old DataFusion RC" https://dist.apache.org/repos/dist/dev/arrow/apache-arrow-datafusion-7.1.0-rc1/
+svn delete -m "delete old DataFusion RC" https://dist.apache.org/repos/dist/dev/datafusion/apache-datafusion-7.1.0-rc1/
 ```
 
 #### Deleting old releases from `release` svn
@@ -453,7 +453,7 @@ svn ls https://dist.apache.org/repos/dist/release/arrow | grep datafusion
 Delete a release:
 
 ```bash
-svn delete -m "delete old DataFusion release" https://dist.apache.org/repos/dist/release/arrow/arrow-datafusion-7.0.0
+svn delete -m "delete old DataFusion release" https://dist.apache.org/repos/dist/release/datafusion/datafusion-7.0.0
 ```
 
 ### Publish the User Guide to the Arrow Site
diff --git a/docs/source/contributor-guide/communication.md b/docs/source/contributor-guide/communication.md
index 6e8e28cee3..96f5e61d10 100644
--- a/docs/source/contributor-guide/communication.md
+++ b/docs/source/contributor-guide/communication.md
@@ -37,7 +37,7 @@ We use the Slack and Discord platforms for informal discussions and coordination
 meet other contributors and get guidance on where to contribute. It is important to note that any technical designs and
 decisions are made fully in the open, on GitHub.
 
-Most of us use the `#arrow-datafusion` and `#arrow-rust` channels in the [ASF Slack workspace](https://s.apache.org/slack-invite) .
+Most of us use the `#datafusion` and `#arrow-rust` channels in the [ASF Slack workspace](https://s.apache.org/slack-invite) .
 Unfortunately, due to spammers, the ASF Slack workspace requires an invitation to join. To get an invitation,
 request one in the `Arrow Rust` channel of the [Arrow Rust Discord server](https://discord.gg/Qw5gKqHxUM).
 
diff --git a/docs/source/user-guide/example-usage.md b/docs/source/user-guide/example-usage.md
index 25b398461f..2fb4e55d69 100644
--- a/docs/source/user-guide/example-usage.md
+++ b/docs/source/user-guide/example-usage.md
@@ -274,7 +274,7 @@ backtrace:    0: std::backtrace_rs::backtrace::libunwind::trace
    3: std::backtrace::Backtrace::capture
              at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/backtrace.rs:298:9
    4: datafusion_common::error::DataFusionError::get_back_trace
-             at /arrow-datafusion/datafusion/common/src/error.rs:436:30
+             at /datafusion/datafusion/common/src/error.rs:436:30
    5: datafusion_sql::expr::function::<impl datafusion_sql::planner::SqlToRel<S>>::sql_function_to_expr
    ............
 ```
diff --git a/docs/source/user-guide/faq.md b/docs/source/user-guide/faq.md
index fbc25f0b72..d803b11333 100644
--- a/docs/source/user-guide/faq.md
+++ b/docs/source/user-guide/faq.md
@@ -28,7 +28,7 @@ DataFusion is a library for executing queries in-process using the Apache Arrow
 model and computational kernels. It is designed to run within a single process, using threads
 for parallel query execution.
 
-[Ballista](https://github.com/apache/arrow-ballista) is a distributed compute platform built on DataFusion.
+[Ballista](https://github.com/apache/datafusion-ballista) is a distributed compute platform built on DataFusion.
 
 # How does DataFusion Compare with `XYZ`?
 
diff --git a/docs/source/user-guide/introduction.md b/docs/source/user-guide/introduction.md
index a3fefdc56a..676543b040 100644
--- a/docs/source/user-guide/introduction.md
+++ b/docs/source/user-guide/introduction.md
@@ -95,7 +95,7 @@ Here are some active projects using DataFusion:
  <!-- "Active" means github repositories that had at least one commit in the last 6 months -->
 
 - [Arroyo](https://github.com/ArroyoSystems/arroyo) Distributed stream processing engine in Rust
-- [Ballista](https://github.com/apache/arrow-ballista) Distributed SQL Query Engine
+- [Ballista](https://github.com/apache/datafusion-ballista) Distributed SQL Query Engine
 - [Comet](https://github.com/apache/datafusion-comet) Apache Spark native query execution plugin
 - [CnosDB](https://github.com/cnosdb/cnosdb) Open Source Distributed Time Series Database
 - [Cube Store](https://github.com/cube-js/cube.js/tree/master/rust)
@@ -129,7 +129,7 @@ Here are some less active projects that used DataFusion:
 - [Flock](https://github.com/flock-lab/flock)
 - [Tensorbase](https://github.com/tensorbase/tensorbase)
 
-[ballista]: https://github.com/apache/arrow-ballista
+[ballista]: https://github.com/apache/datafusion-ballista
 [blaze]: https://github.com/blaze-init/blaze
 [cloudfuse buzz]: https://github.com/cloudfuse-io/buzz-rust
 [cnosdb]: https://github.com/cnosdb/cnosdb

