Re: Looping through a series of telephone numbers

2023-04-02 Thread Mich Talebzadeh
Hi Philippe,

Broadcast variables allow the programmer to keep a read-only variable
cached on each machine rather than shipping a copy of it with tasks. They
can be used, for example, to give every node a copy of a large input
dataset in an efficient manner. Spark also attempts to distribute broadcast
variables using efficient broadcast algorithms to reduce communication cost.

If you have enough memory, the smaller table is collected at the driver and
distributed to every node of the cluster, reducing the lift and shift of data.
Check this link:

https://sparkbyexamples.com/spark/broadcast-join-in-spark/
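
As a rough illustration (not from the thread itself; the paths and the join
column "tel" are placeholder assumptions), a broadcast join in Scala can be
requested with the broadcast() hint:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder.appName("broadcastJoinSketch").getOrCreate()

// small reference table of valid numbers, large CSV of incoming data
val referenceDF = spark.read.parquet("/path/to/reference")
val csvDF = spark.read.option("header", "true").csv("/path/to/input.csv")

// broadcast() asks Spark to ship referenceDF once to every executor,
// so the join runs locally and the large csvDF is not shuffled
val joined = csvDF.join(broadcast(referenceDF), Seq("tel"), "inner")
joined.show()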

HTH

Mich Talebzadeh,
Lead Solutions Architect/Engineering Lead
Palantir Technologies Limited


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 2 Apr 2023 at 20:05, Philippe de Rochambeau  wrote:

> Hi Mich,
> what exactly do you mean by « if you prefer to broadcast the reference
> data »?
> Philippe
>
> Le 2 avr. 2023 à 18:16, Mich Talebzadeh  a
> écrit :
>
> Hi Philippe,
>
> These are my thoughts besides comments from Sean
>
> Just to clarify, you receive a CSV file periodically and you already have
> a file that contains valid patterns for phone numbers (reference)
>
> In a pseudo language you can probe your csv DF against the reference DF
>
> // load your reference dataframe
> val reference_DF = sqlContext.parquetFile("path")
>
> // mark this smaller dataframe to be stored in memory
> reference_DF.cache()
>
> //Create a temp table
>
> reference_DF.createOrReplaceTempView("reference")
>
> // Do the same on the CSV, change the line below
>
> val csvDF = 
> spark.read.format("com.databricks.spark.csv").option("inferSchema", 
> "true").option("header", "false").load("path")
>
> csvDF.cache()  // This may or may not work if the CSV is large, however it is worth trying
>
> csvDF.createOrReplaceTempView("csv")
>
> sqlContext.sql("JOIN Query").show
>
> If you prefer to broadcast the reference data, you must first collect it on 
> the driver before you broadcast it. This requires that your RDD fits in 
> memory on your driver (and executors).
>
> You can then play around with that join.
>
> HTH
>
> Mich Talebzadeh,
> Lead Solutions Architect/Engineering Lead
> Palantir Technologies Limited
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sun, 2 Apr 2023 at 09:17, Philippe de Rochambeau 
> wrote:
>
>> Many thanks, Mich.
>> Is « foreach » the best construct to look up items in a dataset such as
>> the below « telephonedirectory » data set?
>>
>> val telrdd = spark.sparkContext.parallelize(Seq("tel1", "tel2", "tel3", …)) // the telephone sequence
>>
>> // was read from a CSV file
>>
>> val ds = spark.read.parquet("/path/to/telephonedirectory")
>>
>>   rdd.foreach(tel => {
>>     longAcc.select("*").rlike("+" + tel)
>>   })
>>
>>
>>
>>
>> Le 1 avr. 2023 à 22:36, Mich Talebzadeh  a
>> écrit :
>>
>> This may help
>>
>> Spark rlike() Working with Regex Matching Examples
>> Mich Talebzadeh,
>> Lead Solutions Architect/Engineering Lead
>> Palantir Technologies Limited
>>
>>view my Linkedin profile
>> 
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Sat, 1 Apr 2023 at 19:32, Philippe de Rochambeau 
>> wrote:
>>
>>> Hello,
>>> I’m looking for an efficient way in Spark to search for a series of
>>> telephone numbers, contained in a CSV file, in a data set column.
>>>
>>> In pseudo code,
>>>
>>> for tel in [tel1, tel2, …. tel40,000]
>>> search for tel in dataset using .like(« %tel% »)
>>> end for
>>>
>>> I’m using the like function because the telephone numbers in the data set
>>> may contain prefixes, such as « + »; e.g., « +331222 ».

Re: Looping through a series of telephone numbers

2023-04-02 Thread Philippe de Rochambeau
Hi Mich,
what exactly do you mean by « if you prefer to broadcast the reference data »?
Philippe

> Le 2 avr. 2023 à 18:16, Mich Talebzadeh  a écrit :
> 
> Hi Philippe,
> 
> These are my thoughts besides comments from Sean
> 
> Just to clarify, you receive a CSV file periodically and you already have a 
> file that contains valid patterns for phone numbers (reference)
> 
> In a pseudo language you can probe your csv DF against the reference DF
> 
> // load your reference dataframe
> val reference_DF=sqlContext.parquetFile("path")
> 
> // mark this smaller dataframe to be stored in memory
> reference_DF.cache()
> //Create a temp table
> reference_DF.createOrReplaceTempView("reference")
> // Do the same on the CSV, change the line below
> val csvDF = 
> spark.read.format("com.databricks.spark.csv").option("inferSchema", 
> "true").option("header", "false").load("path")
> csvDF.cache()  // This may or may not work if the CSV is large, however it is worth trying
> csvDF.createOrReplaceTempView("csv")
> sqlContext.sql("JOIN Query").show
> If you prefer to broadcast the reference data, you must first collect it on 
> the driver before you broadcast it. This requires that your RDD fits in 
> memory on your driver (and executors).
> 
> You can then play around with that join.
> HTH
> 
> Mich Talebzadeh,
> Lead Solutions Architect/Engineering Lead
> Palantir Technologies Limited
> 
>view my Linkedin profile 
> 
> 
>  https://en.everybodywiki.com/Mich_Talebzadeh
> 
>  
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
> damage or destruction of data or any other property which may arise from 
> relying on this email's technical content is explicitly disclaimed. The 
> author will in no case be liable for any monetary damages arising from such 
> loss, damage or destruction.
>  
> 
> 
> On Sun, 2 Apr 2023 at 09:17, Philippe de Rochambeau  > wrote:
>> Many thanks, Mich.
>> Is « foreach » the best construct to look up items in a dataset such as
>> the below « telephonedirectory » data set?
>> 
>> val telrdd = spark.sparkContext.parallelize(Seq("tel1", "tel2", "tel3", …)) // the telephone sequence
>> // was read from a CSV file
>> val ds = spark.read.parquet("/path/to/telephonedirectory")
>>
>>   rdd.foreach(tel => {
>>     longAcc.select("*").rlike("+" + tel)
>>   })
>> 
>> 
>> 
>>> Le 1 avr. 2023 à 22:36, Mich Talebzadeh >> > a écrit :
>>> 
>>> This may help
>>> 
>>> Spark rlike() Working with Regex Matching Examples
>>> Mich Talebzadeh,
>>> Lead Solutions Architect/Engineering Lead
>>> Palantir Technologies Limited
>>> 
>>>view my Linkedin profile 
>>> 
>>> 
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>> 
>>>  
>>> Disclaimer: Use it at your own risk. Any and all responsibility for any 
>>> loss, damage or destruction of data or any other property which may arise 
>>> from relying on this email's technical content is explicitly disclaimed. 
>>> The author will in no case be liable for any monetary damages arising from 
>>> such loss, damage or destruction.
>>>  
>>> 
>>> 
>>> On Sat, 1 Apr 2023 at 19:32, Philippe de Rochambeau >> > wrote:
 Hello,
 I’m looking for an efficient way in Spark to search for a series of 
 telephone numbers, contained in a CSV file, in a data set column.
 
 In pseudo code,
 
 for tel in [tel1, tel2, …. tel40,000] 
 search for tel in dataset using .like(« %tel% »)
 end for 
 
 I’m using the like function because the telephone numbers in the data set
 may contain prefixes, such as « + »; e.g., « +331222 ».
 
 Any suggestions would be welcome.
 
 Many thanks.
 
 Philippe
 
 
 
 
 
 
>> 



Re: Looping through a series of telephone numbers

2023-04-02 Thread Philippe de Rochambeau
Wow, you guys, Anastasios, Bjørn and Mich, are stars!
Thank you very much for your suggestions. I’m going to print them and study 
them closely.


> Le 2 avr. 2023 à 20:05, Anastasios Zouzias  a écrit :
> 
> Hi Philippe,
> 
> I would like to draw your attention to this great library that saved my day 
> in the past when parsing phone numbers in Spark: 
> 
> https://github.com/google/libphonenumber
> 
> If you combine it with Bjørn's suggestions you will have a good start on your 
> linkage task.
> 
> Best regards,
> Anastasios Zouzias
> 
> 
> On Sat, Apr 1, 2023 at 8:31 PM Philippe de Rochambeau  > wrote:
>> Hello,
>> I’m looking for an efficient way in Spark to search for a series of 
>> telephone numbers, contained in a CSV file, in a data set column.
>> 
>> In pseudo code,
>> 
>> for tel in [tel1, tel2, …. tel40,000] 
>> search for tel in dataset using .like(« %tel% »)
>> end for 
>> 
>> I’m using the like function because the telephone numbers in the data set
>> may contain prefixes, such as « + »; e.g., « +331222 ».
>> 
>> Any suggestions would be welcome.
>> 
>> Many thanks.
>> 
>> Philippe
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> -- Anastasios Zouzias
>  


Re: Looping through a series of telephone numbers

2023-04-02 Thread Anastasios Zouzias
Hi Philippe,

I would like to draw your attention to this great library that saved my day
in the past when parsing phone numbers in Spark:

https://github.com/google/libphonenumber

If you combine it with Bjørn's suggestions you will have a good start on
your linkage task.
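
As a rough sketch of how it could be used (the Scala wrapping, the default
region "FR" and the column name "tel" are assumptions, not from the thread),
numbers can be normalised to E.164 inside a UDF before joining or filtering:

import com.google.i18n.phonenumbers.PhoneNumberUtil
import com.google.i18n.phonenumbers.PhoneNumberUtil.PhoneNumberFormat
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}
import scala.util.Try

val spark = SparkSession.builder.appName("phoneNormaliseSketch").getOrCreate()

// Normalise a raw string to E.164 form, or null if it cannot be parsed.
val toE164 = udf { (raw: String) =>
  val util = PhoneNumberUtil.getInstance()
  Try(util.format(util.parse(raw, "FR"), PhoneNumberFormat.E164)).toOption.orNull
}

// Apply the same normalisation to both sides before the join/lookup.
val ds = spark.read.parquet("/path/to/telephonedirectory")  // assumed layout: a string column "tel"
val cleaned = ds.withColumn("tel_e164", toE164(col("tel")))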

Best regards,
Anastasios Zouzias


On Sat, Apr 1, 2023 at 8:31 PM Philippe de Rochambeau 
wrote:

> Hello,
> I’m looking for an efficient way in Spark to search for a series of
> telephone numbers, contained in a CSV file, in a data set column.
>
> In pseudo code,
>
> for tel in [tel1, tel2, …. tel40,000]
> search for tel in dataset using .like(« %tel% »)
> end for
>
> I’m using the like function because the telephone numbers in the data set
> may contain prefixes, such as « + »; e.g., « +331222 ».
>
> Any suggestions would be welcome.
>
> Many thanks.
>
> Philippe
>
>
>
>
>
>

-- 
-- Anastasios Zouzias



Re: Looping through a series of telephone numbers

2023-04-02 Thread Bjørn Jørgensen
dataset.csv
id,tel_in_dataset
1,+33
2,+331222
3,+331333
4,+331222
5,+331222
6,+331444
7,+331222
8,+331555

telephone_numbers.csv
tel
+331222
+331222
+331222
+331222



Start Spark with all of your CPU and RAM:

import os
import multiprocessing
from pyspark import SparkConf, SparkContext
from pyspark import pandas as ps
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, concat, concat_ws, expr, lit, trim, regexp_replace
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"

number_cores = int(multiprocessing.cpu_count())

mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")  # e.g. 4015976448
memory_gb = int(mem_bytes / (1024.0**3))  # e.g. 3.74


def get_spark_session(app_name: str, conf: SparkConf):
    conf.setMaster("local[{}]".format(number_cores))
    conf.set("spark.driver.memory", "{}g".format(memory_gb)).set(
        "spark.sql.adaptive.enabled", "True"
    ).set(
        "spark.serializer", "org.apache.spark.serializer.KryoSerializer"
    ).set(
        "spark.sql.repl.eagerEval.maxNumRows", "1"
    ).set(
        "sc.setLogLevel", "ERROR"
    )
    return SparkSession.builder.appName(app_name).config(conf=conf).getOrCreate()


spark = get_spark_session("My app", SparkConf())
spark.sparkContext.setLogLevel("ERROR")




#pandas API on spark
tel_df = ps.read_csv("telephone_numbers.csv")

tel_df['tel'] = tel_df['tel'].astype(str)
tel_df['cleaned_tel'] = tel_df['tel'].str.replace('+', '', regex=False)

dataset_df = ps.read_csv("dataset.csv")
dataset_df['tel_in_dataset'] = dataset_df['tel_in_dataset'].astype(str)

dataset_df['cleaned_tel_in_dataset'] = dataset_df['tel_in_dataset'].str.replace('+', '', regex=False)

filtered_df = dataset_df[dataset_df['cleaned_tel_in_dataset'].isin(tel_df['cleaned_tel'].to_list())]

filtered_df.head()


   id tel_in_dataset cleaned_tel_in_dataset
1   2        +331222                 331222
3   4        +331222                 331222
4   5        +331222                 331222
6   7        +331222                 331222


#pyspark
tel_df = spark.read.csv("telephone_numbers.csv", header=True)
tel_df = tel_df.withColumn("cleaned_tel", regexp_replace(col("tel"), "\\+", ""))

dataset_df = spark.read.csv("dataset.csv", header=True)
dataset_df = dataset_df.withColumn(
    "cleaned_tel_in_dataset", regexp_replace(col("tel_in_dataset"), "\\+", "")
)

filtered_df = dataset_df.where(
    col("cleaned_tel_in_dataset").isin([row.cleaned_tel for row in tel_df.collect()])
)

filtered_df.show()


+---+--------------+----------------------+
| id|tel_in_dataset|cleaned_tel_in_dataset|
+---+--------------+----------------------+
|  2|       +331222|                331222|
|  4|       +331222|                331222|
|  5|       +331222|                331222|
|  7|       +331222|                331222|
+---+--------------+----------------------+




søn. 2. apr. 2023 kl. 18:18 skrev Mich Talebzadeh :

> Hi Philippe,
>
> These are my thoughts besides comments from Sean
>
> Just to clarify, you receive a CSV file periodically and you already have
> a file that contains valid patterns for phone numbers (reference)
>
> In a pseudo language you can probe your csv DF against the reference DF
>
> // load your reference dataframe
> val reference_DF = sqlContext.parquetFile("path")
>
> // mark this smaller dataframe to be stored in memory
> reference_DF.cache()
>
> //Create a temp table
>
> reference_DF.createOrReplaceTempView("reference")
>
> // Do the same on the CSV, change the line below
>
> val csvDF = 
> spark.read.format("com.databricks.spark.csv").option("inferSchema", 
> "true").option("header", "false").load("path")
>
> csvDF.cache()  // This may or may not work if the CSV is large, however it is worth trying
>
> csvDF.createOrReplaceTempView("csv")
>
> sqlContext.sql("JOIN Query").show
>
> If you prefer to broadcast the reference data, you must first collect it on 
> the driver before you broadcast it. This requires that your RDD fits in 
> memory on your driver (and executors).
>
> You can then play around with that join.
>
> HTH
>
> Mich Talebzadeh,
> Lead Solutions Architect/Engineering Lead
> Palantir Technologies Limited
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sun, 2 Apr 2023 at 09:17, Philippe de Rochambeau 
> wrote:
>
>> Many thanks, Mich.
>> Is « foreach » the best construct to look up items in a dataset such as
>> the below « telephonedirectory » data set?
>>
>> val telrdd = spark.sparkContext.parallelize(Seq("tel1", "tel2", "tel3", …)) // the telephone sequence

Re: Looping through a series of telephone numbers

2023-04-02 Thread Mich Talebzadeh
Hi Philippe,

These are my thoughts besides comments from Sean

Just to clarify, you receive a CSV file periodically and you already have a
file that contains valid patterns for phone numbers (reference)

In a pseudo language you can probe your csv DF against the reference DF

// load your reference dataframe
val reference_DF = sqlContext.parquetFile("path")

// mark this smaller dataframe to be stored in memory
reference_DF.cache()

//Create a temp table

reference_DF.createOrReplaceTempView("reference")

// Do the same on the CSV, change the line below

val csvDF = spark.read.format("com.databricks.spark.csv")
  .option("inferSchema", "true")
  .option("header", "false")
  .load("path")

csvDF.cache()  // This may or may not work if the CSV is large, however it is worth trying

csvDF.createOrReplaceTempView("csv")

sqlContext.sql("JOIN Query").show
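
As a rough sketch of what that "JOIN Query" could look like (the column names
"tel" and "tel_in_dataset" are illustrative assumptions, not from the thread):

sqlContext.sql("""
  SELECT csv.*
  FROM csv
  JOIN reference
    ON csv.tel_in_dataset = reference.tel
""").show()

If the stored numbers carry prefixes, the ON clause can be relaxed to something
like csv.tel_in_dataset LIKE CONCAT('%', reference.tel, '%'), at the cost of a
much more expensive non-equi join.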

If you prefer to broadcast the reference data, you must first collect
it on the driver before you broadcast it. This requires that your RDD
fits in memory on your driver (and executors).

You can then play around with that join.
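
For illustration only (a minimal sketch with placeholder column names; the
collect-then-broadcast pattern is the point, not the exact code):

// collect the small reference column on the driver ...
val refTels: Set[String] = reference_DF.select("tel").collect().map(_.getString(0)).toSet

// ... broadcast it once to every executor ...
val bcTels = spark.sparkContext.broadcast(refTels)

// ... and probe it from a UDF instead of joining
import org.apache.spark.sql.functions.{col, udf}
val inRef = udf { (value: String) => value != null && bcTels.value.exists(t => value.contains(t)) }
val matched = csvDF.filter(inRef(col("tel_in_dataset")))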

HTH

Mich Talebzadeh,
Lead Solutions Architect/Engineering Lead
Palantir Technologies Limited


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 2 Apr 2023 at 09:17, Philippe de Rochambeau  wrote:

> Many thanks, Mich.
> Is « foreach » the best construct to look up items in a dataset such as
> the below « telephonedirectory » data set?
>
> val telrdd = spark.sparkContext.parallelize(Seq("tel1", "tel2", "tel3", …)) // the telephone sequence
>
> // was read from a CSV file
>
> val ds = spark.read.parquet("/path/to/telephonedirectory")
>
>   rdd.foreach(tel => {
>     longAcc.select("*").rlike("+" + tel)
>   })
>
>
>
>
> Le 1 avr. 2023 à 22:36, Mich Talebzadeh  a
> écrit :
>
> This may help
>
> Spark rlike() Working with Regex Matching Examples
> Mich Talebzadeh,
> Lead Solutions Architect/Engineering Lead
> Palantir Technologies Limited
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sat, 1 Apr 2023 at 19:32, Philippe de Rochambeau 
> wrote:
>
>> Hello,
>> I’m looking for an efficient way in Spark to search for a series of
>> telephone numbers, contained in a CSV file, in a data set column.
>>
>> In pseudo code,
>>
>> for tel in [tel1, tel2, …. tel40,000]
>> search for tel in dataset using .like(« %tel% »)
>> end for
>>
>> I’m using the like function because the telephone numbers in the data set
>> may contain prefixes, such as « + »; e.g., « +331222 ».
>>
>> Any suggestions would be welcome.
>>
>> Many thanks.
>>
>> Philippe
>>
>>
>>
>>
>>
>>
>


Re: Looping through a series of telephone numbers

2023-04-02 Thread Sean Owen
That won't work; you can't use Spark within Spark like that.
If it were exact matches, the best solution would be to load both datasets
and join on telephone number.
For this case, I think your best bet is a UDF that contains the telephone
numbers as a list and decides whether a given number matches something in
the set. Then use that to filter, then work with the data set.
There are probably clever fast ways of efficiently determining if a string
is a prefix of a group of strings in Python you could use too.
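
A minimal sketch of that idea (in Scala rather than Python; the paths and the
column name "tel_in_dataset" are placeholder assumptions):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

val spark = SparkSession.builder.appName("udfFilterSketch").getOrCreate()
import spark.implicits._

// the ~40,000 reference numbers, small enough to hold in driver/executor memory
val tels: Set[String] = spark.read.option("header", "true")
  .csv("/path/to/telephone_numbers.csv")
  .select("tel").as[String].collect().toSet

// true when the stored value contains one of the reference numbers,
// mirroring the like("%tel%") semantics of the original question
val matchesAny = udf { (value: String) => value != null && tels.exists(t => value.contains(t)) }

val ds = spark.read.parquet("/path/to/telephonedirectory")
ds.filter(matchesAny(col("tel_in_dataset"))).show()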

On Sun, Apr 2, 2023 at 3:17 AM Philippe de Rochambeau 
wrote:

> Many thanks, Mich.
> Is « foreach » the best construct to look up items in a dataset such as
> the below « telephonedirectory » data set?
>
> val telrdd = spark.sparkContext.parallelize(Seq("tel1", "tel2", "tel3", …)) // the telephone sequence
>
> // was read from a CSV file
>
> val ds = spark.read.parquet("/path/to/telephonedirectory")
>
>   rdd.foreach(tel => {
>     longAcc.select("*").rlike("+" + tel)
>   })
>
>
>
>
> Le 1 avr. 2023 à 22:36, Mich Talebzadeh  a
> écrit :
>
> This may help
>
> Spark rlike() Working with Regex Matching Examples
> Mich Talebzadeh,
> Lead Solutions Architect/Engineering Lead
> Palantir Technologies Limited
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sat, 1 Apr 2023 at 19:32, Philippe de Rochambeau 
> wrote:
>
>> Hello,
>> I’m looking for an efficient way in Spark to search for a series of
>> telephone numbers, contained in a CSV file, in a data set column.
>>
>> In pseudo code,
>>
>> for tel in [tel1, tel2, …. tel40,000]
>> search for tel in dataset using .like(« %tel% »)
>> end for
>>
>> I’m using the like function because the telephone numbers in the data set
>> may contain prefixes, such as « + »; e.g., « +331222 ».
>>
>> Any suggestions would be welcome.
>>
>> Many thanks.
>>
>> Philippe
>>
>>
>>
>>
>>
>>
>


Re: Looping through a series of telephone numbers

2023-04-02 Thread Philippe de Rochambeau
Many thanks, Mich.
Is « foreach » the best construct to look up items in a dataset such as the
below « telephonedirectory » data set?

val telrdd = spark.sparkContext.parallelize(Seq("tel1", "tel2", "tel3", …)) // the telephone sequence
// was read from a CSV file
val ds = spark.read.parquet("/path/to/telephonedirectory")

  rdd.foreach(tel => {
    longAcc.select("*").rlike("+" + tel)
  })
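
(For reference, a sketch — with placeholder values and the assumed column name
"tel_in_dataset" — of how the rlike approach from Mich's link could be applied
to the whole column in one pass instead of looping:)

import org.apache.spark.sql.functions.col

// Quote each number so "+" is matched literally, then build one alternation
// pattern covering all of them: "\Q+331222\E|\Q+331333\E|..."
val tels = Seq("+331222", "+331333")  // placeholder values
val pattern = tels.map(t => java.util.regex.Pattern.quote(t)).mkString("|")

// One filter over the dataset instead of one query per number
val hits = ds.filter(col("tel_in_dataset").rlike(pattern))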



> Le 1 avr. 2023 à 22:36, Mich Talebzadeh  a écrit :
> 
> This may help
> 
> Spark rlike() Working with Regex Matching Examples
> Mich Talebzadeh,
> Lead Solutions Architect/Engineering Lead
> Palantir Technologies Limited
> 
>view my Linkedin profile 
> 
> 
>  https://en.everybodywiki.com/Mich_Talebzadeh
> 
>  
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
> damage or destruction of data or any other property which may arise from 
> relying on this email's technical content is explicitly disclaimed. The 
> author will in no case be liable for any monetary damages arising from such 
> loss, damage or destruction.
>  
> 
> 
> On Sat, 1 Apr 2023 at 19:32, Philippe de Rochambeau  > wrote:
>> Hello,
>> I’m looking for an efficient way in Spark to search for a series of 
>> telephone numbers, contained in a CSV file, in a data set column.
>> 
>> In pseudo code,
>> 
>> for tel in [tel1, tel2, …. tel40,000] 
>> search for tel in dataset using .like(« %tel% »)
>> end for 
>> 
>> I’m using the like function because the telephone numbers in the data set
>> may contain prefixes, such as « + »; e.g., « +331222 ».
>> 
>> Any suggestions would be welcome.
>> 
>> Many thanks.
>> 
>> Philippe
>> 
>> 
>> 
>> 
>> 
>>