Hi all,
I am using Apache Ignite 2.4 and I've successfully saved a Spark DataFrame
as a SQL table in the Ignite caching layer.
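
For context, the save step looks roughly like this (sketched in Java;
the option names come from the ignite-spark module and may differ across
versions, and 'ignite-config.xml' is just a placeholder path):

    import org.apache.ignite.spark.IgniteDataFrameSettings;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;

    // hypothetical helper: however the DataFrame is built earlier in the job
    Dataset<Row> df = buildPersonDataFrame();

    df.write()
      .format(IgniteDataFrameSettings.FORMAT_IGNITE())
      .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "ignite-config.xml")
      .option(IgniteDataFrameSettings.OPTION_TABLE(), "PERSON")
      .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "UUID")
      .mode(SaveMode.Append)
      .save();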

I am trying to access the data from an external Java program (completely
unrelated to the Spark job that produced and saved the table) using the
Cache API, as if it were a key/value store.

The table, called 'PERSON', has a primary key field called UUID and maps to
an Ignite cache called SQL_PUBLIC_PERSON.

Using the Ignite Cache API I am able to check that a specific entry
exists in the cache by calling:

cache.containsKey(...)
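
For completeness, here is roughly what the client side looks like (the
configuration and the key value are simplified placeholders):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true); // join the cluster as a client, not a server

    try (Ignite ignite = Ignition.start(cfg)) {
        // the cache Ignite created for the SQL table
        IgniteCache<Object, Object> cache = ignite.cache("SQL_PUBLIC_PERSON");

        // placeholder: a UUID value I know exists in the table
        // (I'm not sure whether the keys are Strings or java.util.UUID)
        String key = "...";

        System.out.println(cache.containsKey(key)); // this prints true
    }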


However, if I try to get the value by calling cache.get(...) for a
specific key, I get a ClassNotFoundException (the full stacktrace is
attached).
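
The failing call is just cache.get(key) on the same cache instance as
above. I did notice that IgniteCache has a withKeepBinary() method; is
something like the sketch below the intended way to read rows without
having the generated class on my classpath? I couldn't confirm this from
the docs, so the field access is only my guess:

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.binary.BinaryObject;

    // values come back as BinaryObject, so (I assume) no generated
    // class is needed on the client side
    IgniteCache<Object, BinaryObject> binCache = cache.withKeepBinary();

    BinaryObject row = binCache.get(key);
    if (row != null)
        // "NAME" is a guessed column of my PERSON table
        System.out.println("name = " + row.field("NAME"));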

Now, I guess Ignite dynamically generated a schema bean for my DataFrame
when the DataFrame was saved from Spark.
Since the generated bean class name also seems to follow some internal
rule (in this example it's
'SQL_PUBLIC_PERSON_da18b6a2_8b41_4c34_9451_6fd9ace8e73d'), I am not sure
whether this usage pattern makes sense at all.
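
On a related note, I wondered whether the client can at least discover
the generated type at runtime. If I read the Javadoc correctly,
something like this should list the binary types the cluster knows about
(same ignite instance as in the snippet above):

    import org.apache.ignite.IgniteBinary;
    import org.apache.ignite.binary.BinaryType;

    IgniteBinary binary = ignite.binary();

    // print every binary type registered in the cluster and its fields
    for (BinaryType type : binary.types())
        System.out.println(type.typeName() + " fields=" + type.fieldNames());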

I am very new to Apache Ignite, so apologies if this is a silly
question, but I have not been able to find any clue in the official
documentation.

Thanks,
Luca

Attachment: stacktrace.log