Thanks Mich, sorry, I might have been a bit unclear in my original email.
The timestamps are getting loaded as 2003-11-24T09:02:32+ for example,
but I want them loaded as 2003-11-24T09:02:32+1300. I know how to do this with
various transformations; however, I'm wondering if there is any Spark or JVM
setting for it.
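One setting-based approach, as a minimal sketch rather than anything from the original thread: spark.sql.session.timeZone controls the zone Spark uses when rendering timestamps, and date_format can emit the numeric offset explicitly. The column name event_ts, the file path, and the Pacific/Auckland zone are assumptions here, and the offset pattern letters should be checked against the datetime patterns of your Spark version.
```
# Hedged sketch: render timestamps with an explicit numeric offset such as +1300.
from pyspark.sql import SparkSession
from pyspark.sql.functions import date_format

spark = SparkSession.builder.appName("ts-offset-demo").getOrCreate()

# The session time zone determines the zone (and hence the offset) used when
# timestamps are rendered as strings.
spark.conf.set("spark.sql.session.timeZone", "Pacific/Auckland")

df = spark.read.parquet("parquet_file_path")  # placeholder path
# 'xx' emits the offset without a colon, e.g. +1300 (verify for your Spark version).
df = df.withColumn("event_ts_str",
                   date_format("event_ts", "yyyy-MM-dd'T'HH:mm:ssxx"))
df.select("event_ts", "event_ts_str").show(truncate=False)
```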
Hello Mich,
Thank you for providing such useful feedback and responses.
We appreciate your contribution to this community forum. I personally find your
posts insightful.
+1 for me
Best,
AK
On Wednesday, 6 September 2023 at 18:34:27 BST, Mich Talebzadeh wrote:
Hi Varun,
In answer to your questions, these are my views.
Sounds like a network issue, for example when connecting to a remote server?
Try:
ping 172.21.242.26
telnet 172.21.242.26 596590
or nc -vz 172.21.242.26 596590
example
nc -vz rhes76 1521
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 50.140.197.230:1521.
Ncat: 0 bytes sent, 0 bytes received
Hi Varun,
In answer to your questions, these are my views. However, they are just
views and should not be taken as facts, so to speak.
1. *Focus and Time Management:* I often struggle with maintaining focus and
effectively managing my time. This leads to productivity issues and affects
my
I want to use a YARN cluster with my current code. If I use
conf.set("spark.master","local[*]") in place of
conf.set("spark.master","yarn"), everything works fine, but when I try to use
yarn in setMaster, my code gives the error below.
```
package com.example.pocsparkspring;
import org.apache.hadoop.conf.Configuration;
```
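For reference, here is a minimal sketch of the same switch in PySpark (not the poster's Java/Spring code): running against YARN typically also requires the client to find the cluster's configuration, usually via HADOOP_CONF_DIR or YARN_CONF_DIR. The path and application name below are placeholders.
```
# Hedged sketch: build a session against YARN instead of local[*].
import os
from pyspark.sql import SparkSession

# Placeholder path: must point at the cluster's Hadoop/YARN client configs,
# otherwise master("yarn") cannot locate the ResourceManager.
os.environ.setdefault("HADOOP_CONF_DIR", "/etc/hadoop/conf")

spark = (
    SparkSession.builder
    .appName("yarn-poc")                      # placeholder app name
    .master("yarn")                           # instead of local[*]
    .config("spark.submit.deployMode", "client")
    .getOrCreate()
)
```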
Hi Jack,
You may try from_utc_timestamp and to_utc_timestamp to see if they help.
from pyspark.sql.functions import from_utc_timestamp
You can read your Parquet file into a DataFrame:
df = spark.read.parquet('parquet_file_path')
# Convert timestamps (assuming your column is named "event_ts") from UTC to
# Pacific/Auckland
df = df.withColumn("event_ts_nz",
                   from_utc_timestamp(df["event_ts"], "Pacific/Auckland"))
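For completeness, a hedged sketch of the inverse direction also mentioned in the reply, using the same assumed column names: to_utc_timestamp re-interprets a wall-clock value in the given zone back as UTC.
from pyspark.sql.functions import to_utc_timestamp
# Inverse of the conversion above: treat the Pacific/Auckland wall-clock value
# as local time in that zone and convert it back to UTC.
df = df.withColumn("event_ts_utc",
                   to_utc_timestamp("event_ts_nz", "Pacific/Auckland"))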