I know this is a bit of a silly question, but what is the norm for Spark
column headings? Is it camelCase or snake_case? For example, someone
suggested, and I quote:

"SumTotalInMillionGBP accurately conveys the meaning but is a bit long and
uses camelCase, which is not the standard convention for Spark DataFrames
(usually snake_case). Use snake_case for better readability, like
total_price_in_millions_gbp."

So this is the gist of the output:

+----------------------+---------------------+---------------------------+
|district              |NumberOfOffshoreOwned|total_price_in_millions_gbp|
+----------------------+---------------------+---------------------------+
|CITY OF WESTMINSTER   |4452                 |21472.5                    |
|KENSINGTON AND CHELSEA|2403                 |6544.8                     |
|CAMDEN                |1023                 |4275.9                     |
|SOUTHWARK             |1080                 |3938.0                     |
|ISLINGTON             |627                  |3062.0                     |
|TOWER HAMLETS         |1715                 |3008.0                     |
|HAMMERSMITH AND FULHAM|765                  |2137.2                     |
+----------------------+---------------------+---------------------------+

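In the meantime, this is roughly how I would normalise the mixed headings
above to snake_case in one pass (a minimal PySpark sketch; the
camel_to_snake helper and the one-row sample DataFrame are mine for
illustration, not the actual job that produced the output):

import re
from pyspark.sql import SparkSession

def camel_to_snake(name: str) -> str:
    # Insert underscores at word boundaries, then lower-case, e.g.
    # NumberOfOffshoreOwned -> number_of_offshore_owned
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()

spark = SparkSession.builder.appName("snake_case_headings").getOrCreate()

# Hypothetical one-row stand-in for the DataFrame behind the output above
df = spark.createDataFrame(
    [("CITY OF WESTMINSTER", 4452, 21472.5)],
    ["district", "NumberOfOffshoreOwned", "total_price_in_millions_gbp"],
)

# toDF(*names) renames every column in a single pass
df = df.toDF(*[camel_to_snake(c) for c in df.columns])
df.printSchema()  # NumberOfOffshoreOwned becomes number_of_offshore_owned

Using toDF(*names) keeps the whole rename in one statement, which is tidier
than chaining withColumnRenamed for each column.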
Now, I recently saw a note (if I recall correctly) that Spark should be
using camelCase in new Spark-related documents. What are the accepted
views, or does it matter?

Thanks
Mich Talebzadeh,

Technologist | Solutions Architect | Data Engineer | Generative AI

London
United Kingdom

 https://en.everybodywiki.com/Mich_Talebzadeh



Disclaimer: The information provided is correct to the best of my knowledge
but of course cannot be guaranteed. It is essential to note that, as with
any advice: "one test result is worth one-thousand expert opinions"
(Wernher von Braun).
