Re: Problems running TPC-H on Raspberry Pi Cluster

2019-07-12 Thread agg212
Good to know. Will look into the Raspberry Pi 4 (w/4GB RAM). In general, are there any tuning or configuration tips/tricks for very memory-constrained deployments (e.g., 1-4GB RAM)?
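
For illustration only, a sketch of the kind of conservative settings sometimes tried on nodes this small; the master URL and every value below are assumptions for the example, not recommendations from this thread.

    import org.apache.spark.sql.SparkSession

    // All values are illustrative guesses for a 1-4GB node, not tuned numbers.
    val spark = SparkSession.builder()
      .appName("low-memory-sketch")
      .master("spark://pi-master:7077")               // hypothetical standalone master URL
      .config("spark.executor.memory", "512m")        // leave headroom for the OS on a 1GB node
      .config("spark.memory.fraction", "0.5")         // shrink the unified execution/storage region
      .config("spark.sql.shuffle.partitions", "48")   // small cluster: fewer, smaller shuffle tasks
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()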

Problems running TPC-H on Raspberry Pi Cluster

2019-07-10 Thread agg212
We are trying to benchmark TPC-H (scale factor 1) on a 13-node Raspberry Pi 3B+ cluster (1 master, 12 workers). Each node has 1GB of RAM and a quad-core processor, running Ubuntu Server 18.04. The cluster uses the Spark standalone scheduler, with the *.tbl files from TPC-H's dbgen tool stored in
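
For reference, a minimal sketch of loading a dbgen .tbl file (pipe-delimited, trailing "|", no header) into a temp table; the path, the abbreviated schema, and the case-class name are assumptions for illustration, not the poster's code.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("tpch-load-sketch").getOrCreate()
    import spark.implicits._

    // Only the first few lineitem columns are parsed here; the real table has 16.
    case class LineitemLite(l_orderkey: Long, l_partkey: Long, l_suppkey: Long,
                            l_linenumber: Int, l_quantity: Double)

    val lineitem = spark.read.textFile("/data/tpch/lineitem.tbl")   // path is an assumption
      .map { line =>
        val f = line.split('|')                                     // dbgen uses '|' as the delimiter
        LineitemLite(f(0).toLong, f(1).toLong, f(2).toLong, f(3).toInt, f(4).toDouble)
      }
      .toDF()

    lineitem.createOrReplaceTempView("lineitem")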

Re: Mllib native netlib-java/OpenBLAS

2014-12-01 Thread agg212
Thanks for your reply, but I'm still running into issues installing/configuring the native libraries for MLlib. Here are the steps I've taken; please let me know if anything is incorrect. - Download the Spark source - Unzip and compile using `mvn -Pnetlib-lgpl -DskipTests clean package` - Run
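
One quick way to see whether the -Pnetlib-lgpl build actually picked up a native implementation is to ask netlib-java which BLAS class it loaded. A sketch, assuming the netlib-java 1.x artifacts MLlib depended on in the Spark 1.x era:

    import com.github.fommil.netlib.BLAS

    // Prints e.g. ...NativeSystemBLAS when a system OpenBLAS/ATLAS was found,
    // ...NativeRefBLAS for the bundled reference build, or ...F2jBLAS when
    // netlib-java fell back to the pure-Java implementation.
    println(BLAS.getInstance().getClass.getName)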

Mllib native netlib-java/OpenBLAS

2014-11-24 Thread agg212
Hi, I'm trying to improve performance for Spark's MLlib, and I am having trouble getting the native netlib-java libraries installed/recognized by Spark. I am running on a single machine with Ubuntu 14.04, and here is what I've tried: sudo apt-get install libgfortran3; sudo apt-get install libatlas3-base
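
As a rough before/after check once the packages are installed, a small workload that goes through MLlib's netlib-java-backed linear algebra; the matrix sizes and names below are arbitrary choices for illustration, not from this thread.

    import org.apache.spark.mllib.linalg.{Matrices, Vectors}
    import org.apache.spark.mllib.linalg.distributed.RowMatrix
    import scala.util.Random

    // sc is the SparkContext provided by spark-shell
    val rows = sc.parallelize(Seq.fill(2000)(Vectors.dense(Array.fill(500)(Random.nextDouble()))))
    val mat = new RowMatrix(rows)
    val local = Matrices.dense(500, 200, Array.fill(500 * 200)(Random.nextDouble()))
    val product = mat.multiply(local)   // dense multiply; timing should differ with native BLAS
    println(product.numRows() + " x " + product.numCols())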

Re: Mllib native netlib-java/OpenBLAS

2014-11-24 Thread agg212
I am running it in local mode. How can I use the built version (in local mode) so that I can use the native libraries?

Fixed Sized Strings in Spark SQL

2014-10-27 Thread agg212
Hi, I was wondering how to implement fixed-size strings in Spark SQL. I would like to implement TPC-H, which uses fixed-size strings for certain fields (e.g., the 15-character L_SHIPMODE field). Is there a way to use a fixed-length char array instead of a string?
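
Spark SQL's string type is variable-length, so one common workaround (a sketch using the later DataFrame API, not something from this thread) is to pad or truncate to the fixed width at load time:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{rpad, substring}

    val spark = SparkSession.builder().master("local[*]").appName("fixed-width-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq("MAIL", "TRUCK", "REG AIR").toDF("l_shipmode")
    val fixed = df.withColumn("l_shipmode_fixed",
      rpad(substring($"l_shipmode", 1, 15), 15, " "))   // force exactly 15 characters
    fixed.show(truncate = false)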

Spark SQL Exists Clause

2014-10-26 Thread agg212
Hey, I'm trying to run TPC-H Query 4 (shown below) and get the following error: Exception in thread "main" java.lang.RuntimeException: [11.25] failure: ``UNION'' expected but `select' found. It seems like Spark SQL doesn't support the EXISTS clause. Is this true? select o_orderpriority,
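
As the error suggests, the SQL parser at the time did not accept EXISTS subqueries (support arrived in later releases). A commonly suggested workaround was to rewrite the correlated EXISTS as a LEFT SEMI JOIN and run it through HiveContext. The sketch below assumes the standard TPC-H tables are registered as temp tables; it is an illustrative rewrite, not the poster's query verbatim.

    import org.apache.spark.sql.hive.HiveContext

    val hiveCtx = new HiveContext(sc)   // sc: the SparkContext from spark-shell
    val q4 = hiveCtx.sql("""
      select o_orderpriority, count(*) as order_count
      from orders left semi join lineitem
        on l_orderkey = o_orderkey and l_commitdate < l_receiptdate
      where o_orderdate >= '1993-07-01' and o_orderdate < '1993-10-01'
      group by o_orderpriority
      order by o_orderpriority
    """)
    q4.collect().foreach(println)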