I'm trying to stream dataframes to an SQLite table and everything works
fine unless the dataframe has both numeric and character columns.
When I run:
source = DataStreams.Data.Table(df)
It works if the dataframe is all strings or all numbers. With mixed data,
however, it changes all the values.
I should clarify, with mixed data, all columns starting with the first
numeric column are changed to #NULL. So the first 11 columns are text,
columns 12-15 are numeric, and then 16-20 are text. When I convert it to
Data.Table, columns 1-11 are strings and columns 12-20 are #NULL.
> At best, you'll only see every other line, right? At worst, eachline may
> do some IO lookahead (i.e. read one line ahead) and this will do something
> even more confusing.
>
> On Thu, Jan 28, 2016 at 3:35 PM, Brandon Booth <etu...@gmail.com
> > wrote:
>
>> No rea
I'm parsing an XML file that's about 30gb and wrote the loop below to parse
it line by line. My code cycles through each line and builds a 1x200
dataframe that is appended to a larger dataframe. When the larger dataframe
gets to 1000 rows I stream it to an SQLite table. The code works for the
> Why you are using
>
> for line in eachline(f)
>     l = readline(f)
>
>
> instead of
>
> for l in eachline(f)
>
>
> ?
>
> Best
>
> On Thursday, January 28, 2016, 12:42:35 (UTC-3), Brandon Booth wrote:
>>
>> I'm parsing an XML file that's ab
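The skipping behaviour described above is easy to see with a throwaway file. A minimal sketch (file contents invented) contrasting the two loops; the chomp is there because older Julia kept the trailing newline on each line:

```julia
# Build a throwaway three-line file to demonstrate the two loops.
path, io = mktemp()
write(io, "one\ntwo\nthree\n")
close(io)

# Correct: eachline alone yields every line exactly once.
seen = String[]
open(path) do f
    for line in eachline(f)
        push!(seen, chomp(line))
    end
end
# seen == ["one", "two", "three"]

# Buggy pattern from the thread: readline inside the loop consumes the
# next line, so at best you only see every other one.
skipped = String[]
open(path) do f
    for line in eachline(f)
        l = readline(f)
        push!(skipped, chomp(l))
    end
end
```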
> isdefined(:awards) tests for a global variable called awards, so it
> won't find a local variable called awards. What are you trying to achieve
> with this test?
>
> On Wednesday, January 13, 2016 at 12:48:50 PM UTC+10, Brandon Booth wrote:
>>
>> So this is the full function.
So this is the full function. Note lines 4 and 5 - even if I define the
"awards" and "idvs" arrays in the function, it doesn't work. If I define
them outside the function, it works fine.
Thanks.
Brandon
function parsefeed(url,st,fin)
db = SQLite.DB("/home/brandon/Documents/FPDS/fpds.db")
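For what it's worth, the quoted explanation can be checked directly: isdefined(:awards) (spelled isdefined(Main, :awards) on current Julia) consults only global bindings, while the @isdefined macro also sees locals. A small sketch, with awards as the stand-in name:

```julia
# isdefined against a module only sees globals; the local `awards`
# below is invisible to it.
function via_isdefined()
    awards = Any[]
    return isdefined(Main, :awards)   # false unless a *global* awards exists
end

# The @isdefined macro checks the enclosing local scope instead.
function via_macro()
    awards = Any[]
    return @isdefined awards          # true: it finds the local
end
```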
I've defined a function that is something like the pseudo code below. It
only works properly if i define the subset1 and subset2 arrays outside of
the function. Any suggestions on how to make it work without having to
define the arrays outside of the function? I've read some of the discussion
I tried defining the arrays in the function before the loop, but that
didn't work either. It only works if I define them outside the function.
For more context, I'm parsing data from an ATOM feed. There are several
different types of records, each with a slightly different structure. I'm
only
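One way to avoid the whole question of where the arrays live is to create them inside the function and return them to the caller, rather than relying on globals. A sketch in which `records` and the odd/even split are invented, standing in for the pseudocode's subset1 and subset2:

```julia
# Build the subsets as locals and return them; no globals required,
# so the function works wherever it is defined.
function split_records(records)
    subset1 = Any[]
    subset2 = Any[]
    for r in records
        isodd(r) ? push!(subset1, r) : push!(subset2, r)
    end
    return subset1, subset2
end
```

Calling `subset1, subset2 = split_records(1:6)` then hands both arrays back without touching global scope.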
I'm trying to parse a series of XML files and write selected values to an
SQLite database. My code works on smaller files, but crashes when I get to
anything above about 1 GB.
I'm using Atom with the Hydrogen plugin on Julia 0.4.2.
Any suggestions on what is going wrong or alternative
> I've never tried, but you may be able to use streaming XML parsing of the
> LibExpat.jl package to parse such a large XML file. See
> https://github.com/amitmurthy/LibExpat.jl#streaming-xml-parsing.
>
> On Tuesday, January 5, 2016 at 11:55:52 PM UTC+9, Brandon Booth wrote:
>>
>>
I'm trying to cycle through a series of large XML files, pull out selected
values, and write them to an SQLite database. My code works for smaller
files, but crashes when I get to anything larger than about 1 GB.
I put the parsing into the following function:
function iparse(file) f =
`insert into tbl values ($(join(vals,',')))` to do a single row.
Also note that the `create` and `append` methods are supplied to handle
uploading table-like datastructures (i.e. anything that supports size(A)
and getindex(A, i, j)).
-Jacob
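Reconstructing the interpolation from the snippet above: building the single-row statement is plain string interpolation. The execution call in SQLite.jl has changed across versions, so only the SQL construction is sketched here (table name and values are invented; for real data, prepared statements with ? placeholders are safer than splicing values in):

```julia
# Join one row of pre-formatted values into a single INSERT statement.
vals = ["'widget'", "42", "3.14"]   # hypothetical row, strings pre-quoted
sql = "insert into tbl values ($(join(vals, ',')))"
# sql == "insert into tbl values ('widget',42,3.14)"
```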
On Tue, Aug 4, 2015 at 10:44 AM, Brandon Booth etu
I'm trying to insert a series of large datasets into an SQLite database. My
plan was to loop through the datasets and insert chunks of rows into the
database. I'm trying to get a single row to work and then expand it to work
with blocks of rows.
So far, this works:
v1 = vals[1,1]
v2 =
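Extending the single-row idea to a block only needs a loop over the first dimension, and as noted above anything supporting size(A) and getindex(A, i, j) will do. A sketch with an invented 2-column block (everything is quoted as text here for brevity; SQLite will coerce, but prepared statements remain the cleaner route):

```julia
# One INSERT string per row of a table-like block of values.
vals = ["a" 1; "b" 2]                        # hypothetical rows
stmts = String[]
for i in 1:size(vals, 1)
    row = join(["'$(vals[i, j])'" for j in 1:size(vals, 2)], ',')
    push!(stmts, "insert into tbl values ($row)")
end
# stmts[1] == "insert into tbl values ('a','1')"
```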
Have you installed git?
I had various issues installing Julia at work, mostly due to access rights.
Once I got admin rights to the appropriate folders everything worked.
the return type one desires. Right now the only option is
DataFrame, otherwise it defaults to a DataArray.
Cheers,
David
*From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On Behalf Of *Brandon Booth
*Sent:* Tuesday, May 19, 2015 7:22 PM
Either by passing an array of symbols as colnames, or by passing
header=false (in which case you will get auto-created colnames).
Best,
David
*From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On Behalf Of *Brandon Booth
*Sent
I feel like this should be simple to do, but I can't seem to do it. I'm
using ExcelReaders and it imports as a DataArray whereas I'd like to have
the data as a DataFrame. I didn't see anything in the approximately 375
pages of methods for convert.
Thanks.
Brandon
*From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On Behalf Of *Brandon Booth
*Sent:* Tuesday, May 19, 2015 7:22 PM
*To:* julia...@googlegroups.com
*Subject:* [julia-users] Convert DataArray to DataFrame
I feel like this should be simple to do, but I can't seem to do it. I'm
using
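If the imported result is essentially a matrix of cell values, one option (sketched below; the column names are invented, and DataFrames constructors have changed between versions, so check the docs for yours) is to wrap it in a DataFrame directly:

```julia
using DataFrames

# Wrap a plain matrix of values in a DataFrame, naming the columns.
m = ["a" 1.0; "b" 2.0]              # stand-in for the imported values
df = DataFrame(m, [:label, :value])
```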
Any chance Julia Computing is working with any government agencies? I work
for a federal agency and am making a pitch to make Julia available as an
alternative to SAS and Stata. I've been given permission to use Julia for a
current project with the expectation that I put together a business
I'm trying to read a csv file from a thumb drive using IJulia and keep
getting an error.
My code reads:
params = readcsv("/media/brandon/ED2F-0842/Parameters.csv", ',')
I get the following error:
'readcsv' has no method matching readcsv(::ASCIIString, ::Char)
while loading In[10], in expression
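If memory serves, readcsv on 0.4 took the path alone (the comma is implied), which is why the two-argument call has no matching method; for an explicit delimiter the function is readdlm, which survives on current Julia in the DelimitedFiles stdlib. A runnable sketch with a throwaway file standing in for the thumb-drive path:

```julia
using DelimitedFiles

# Throwaway CSV standing in for the thumb-drive file.
path, io = mktemp()
write(io, "alpha,1\nbeta,2\n")
close(io)

params = readdlm(path, ',')   # explicit comma delimiter
# 2x2 matrix: strings in column 1, Float64 in column 2
```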
Doh! Clearly, I'm new around here.
On Wednesday, February 25, 2015 at 2:34:45 PM UTC-5, Steven G. Johnson
wrote:
On Wednesday, February 25, 2015 at 12:37:35 PM UTC-5, Brandon Booth wrote:
I'm trying to read a csv file from a thumb drive using IJulia and keep
getting an error.
My code