For all the work that is necessary to load a warehouse, couldn’t that work be 
considered a special case of CEP? Real time means I’m trying to get to zero lag 
between an event happening in the transactional system and someone being able 
to do analytics on that data, and not just from within that application. If that 
were all I needed, I’d just use an in-memory solution and be done. I want the 
business to be able to do 360-degree analysis of their business using 
up-to-the-second data.

This really shouldn’t be hard and I feel like I am missing something.

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

From: Jörn Franke [mailto:jornfra...@gmail.com]
Sent: Wednesday, March 1, 2017 1:25 AM
To: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Cc: user@spark.apache.org
Subject: Re: using spark to load a data warehouse in real time

I am not sure that Spark Streaming is what you want. It is for streaming 
analytics, not for loading a DWH.

You also need to define what real time means and what is needed there; it will 
differ significantly from client to client.

From my experience, SQL alone will not be enough for users in the future. 
Large data volumes require much more than simple aggregations, which become 
less useful at that scale. Users will have to learn new ways of dealing with 
the data from a business perspective, by employing proper sampling of large 
datasets, machine learning approaches, etc. These are new methods that are 
business driven, not technically driven. I think it is wrong to assume that 
users learning new skills is a bad thing; it may become a necessity in the 
future.

On 28 Feb 2017, at 23:18, Adaryl Wakefield 
<adaryl.wakefi...@hotmail.com> wrote:
I’m actually trying to come up with a generalized use case that I can take from 
client to client. We have structured data coming from some application. Instead 
of dropping it into Hadoop and then using yet another technology to query that 
data, I just want to dump it into a relational MPP DW so nobody has to learn 
new skills or new tech just to do some analysis. Everybody and their mom can 
write SQL. Designing relational databases is a rare skill, but not as rare as 
the skills needed to design some NoSQL solutions.

I’m looking for the fastest path to move a company from batch to real-time 
analytical processing.

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

From: Mohammad Tariq [mailto:donta...@gmail.com]
Sent: Tuesday, February 28, 2017 12:57 PM
To: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Cc: user@spark.apache.org
Subject: Re: using spark to load a data warehouse in real time

Hi Adaryl,

You could definitely load data into a warehouse from Spark via DataFrames, 
using Spark's JDBC support. Could you please explain your use case a bit more? 
That'll help us answer your query better.
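As a rough sketch of that JDBC path (the host, database, table, and credentials below are invented placeholders, not values from this thread): the helper assembles the option map a DataFrameWriter would take, with the actual Spark calls shown in comments so the snippet runs without a cluster.

```python
# Sketch of Spark's JDBC write path. All connection values below are
# hypothetical placeholders.

def jdbc_options(host: str, port: int, database: str, table: str,
                 user: str, password: str) -> dict:
    """Assemble the options passed to DataFrameWriter.format("jdbc")."""
    return {
        "url": f"jdbc:postgresql://{host}:{port}/{database}",
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "org.postgresql.Driver",
    }

opts = jdbc_options("gp-master", 5432, "dw", "staging.orders", "etl", "secret")

# With a SparkSession in hand, the load itself would then be:
#   df.write.format("jdbc").options(**opts).mode("append").save()
# mode("append") inserts the DataFrame's rows; Spark's JDBC writer has no
# built-in upsert/merge, so key resolution happens on the database side.

print(opts["url"])
```

Since GPDB speaks the Postgres wire protocol, the stock Postgres JDBC driver is the usual starting point, though that would need verification against the target Greenplum version.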




Tariq, Mohammad
about.me/mti
On Wed, Mar 1, 2017 at 12:15 AM, Adaryl Wakefield 
<adaryl.wakefi...@hotmail.com> wrote:
I haven’t heard of Kafka Connect. I’ll have to look into it. Kafka would, of 
course, have to be in any architecture, but it looks like they are suggesting 
that Kafka is all you need.

My primary concern is the complexity of loading warehouses. I have a web 
development background, so I have somewhat of an idea of how to insert data 
into a database from an application. I’ve since moved on to straight database 
programming and don’t work with anything that reads from an app anymore.

Loading a warehouse requires a lot of cleaning of data, and looking up and 
assigning keys to maintain referential integrity. Usually that’s done in a 
batch process. Now I have to do it record by record (or a few records at a 
time). I have some ideas, but I’m not quite there yet.

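A minimal sketch of that per-record flow, using SQLite as a stand-in for the MPP warehouse (table and column names are invented for illustration): each incoming record first resolves, or creates, its dimension row's surrogate key, and only then is the fact row written, so referential integrity holds one record at a time.

```python
import sqlite3

# SQLite stands in for the warehouse; tables/columns are invented examples.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY AUTOINCREMENT,
        customer_id  TEXT UNIQUE,          -- natural key from the source app
        name         TEXT
    );
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount       REAL
    );
""")

def surrogate_key(conn, customer_id, name):
    """Look up the dimension's surrogate key, inserting the row if new."""
    row = conn.execute(
        "SELECT customer_key FROM dim_customer WHERE customer_id = ?",
        (customer_id,)).fetchone()
    if row:
        return row[0]
    cur = conn.execute(
        "INSERT INTO dim_customer (customer_id, name) VALUES (?, ?)",
        (customer_id, name))
    return cur.lastrowid

def load_record(conn, event):
    """Per-record load: resolve the key first, then insert the fact row."""
    key = surrogate_key(conn, event["customer_id"], event["name"])
    conn.execute("INSERT INTO fact_sales (customer_key, amount) VALUES (?, ?)",
                 (key, event["amount"]))

# Two events for the same customer: one dimension row, two fact rows.
for ev in [{"customer_id": "C1", "name": "Acme", "amount": 10.0},
           {"customer_id": "C1", "name": "Acme", "amount": 5.0}]:
    load_record(db, ev)

print(db.execute("SELECT COUNT(*) FROM dim_customer").fetchone()[0])  # 1
print(db.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0])    # 2
```

In a streaming job the same lookup-then-insert logic would run per micro-batch against the warehouse, with the obvious caveat that a round trip per record is exactly the latency cost being weighed here.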
I thought Spark SQL would be the way to get this done, but so far all the 
examples I’ve seen are just SELECT statements; no INSERT or MERGE statements.

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

From: Femi Anthony [mailto:femib...@gmail.com]
Sent: Tuesday, February 28, 2017 4:13 AM
To: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Cc: user@spark.apache.org
Subject: Re: using spark to load a data warehouse in real time

Have you checked to see if there are any drivers that enable you to write to 
Greenplum directly from Spark?

You can also take a look at this link:

https://groups.google.com/a/greenplum.org/forum/m/#!topic/gpdb-users/lnm0Z7WBW6Q

Apparently GPDB is based on Postgres, so that approach may work.

Another approach would be for Spark Streaming to write to Kafka, and then have 
another process read from Kafka and write to Greenplum.

Kafka Connect may be useful in this case:

https://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines/

Femi Anthony
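For that Kafka Connect route, a sink configuration along these lines would drain a topic into the warehouse without custom code. The property names follow Confluent's JDBC sink connector; the topic, connection URL, credentials, and key field are placeholders, and whether the stock Postgres dialect covers a given Greenplum version would need verification.

```json
{
  "name": "greenplum-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:postgresql://gp-master:5432/dw",
    "connection.user": "etl",
    "connection.password": "secret",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "order_id",
    "auto.create": "false"
  }
}
```

With "insert.mode" set to "upsert", the connector itself handles the merge-on-key behavior discussed above, which is what makes the "Kafka is all you need" pitch plausible for this use case.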

On Feb 27, 2017, at 7:18 PM, Adaryl Wakefield <adaryl.wakefi...@hotmail.com> 
wrote:
Is anybody using Spark Streaming/SQL to load a relational data warehouse in 
real time? There isn’t a lot of information on this use case out there. When I 
google real-time data warehouse loading, nothing I find is up to date. It’s all 
turn-of-the-century stuff that doesn’t take into account advancements in 
database technology. Additionally, whenever I try to learn Spark, it’s always 
the same thing: play with Twitter data, never structured data. All the CEP use 
cases are about data science.

I’d like to use Spark to load Greenplum in real time. Intuitively, this should 
be possible. I was thinking Spark Streaming with Spark SQL, along with an ORM, 
should do it. Am I off base with this? Is the reason there are no examples that 
there is a better way to do what I want?

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData
