I think Karen's on the right track.  If you SCONNECT / SATTACH to the CSV 
file (or the Excel file) using the ODBC driver, then as long as you don't 
need to specify column aliases, it doesn't matter how many columns there 
are.  You could then get the table_id from sys_tables, then cursor 
through sys_columns where sys_table_id = table_id.   You would have to 
make sure those item part numbers don't contain illegal characters or 
spaces, since in that case you would need column aliases to connect.
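A rough illustration of that idea outside R:BASE, in plain Python against the CSV itself: read only the header row to discover the item columns at run time, and sanitize part numbers that would otherwise be illegal as column names. The file layout and the sanitizing rule here are assumptions, not part of Dawn's R:BASE suggestion.

```python
import csv
import io
import re

def discover_item_columns(csv_text):
    """Read only the header row and return (store_col, item_cols),
    sanitizing item part numbers that contain spaces or other
    characters that would be illegal as column names."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    store_col, item_cols = header[0], header[1:]
    # Replace anything that is not a letter, digit or underscore,
    # so e.g. "A-100" and "A 100" both become "A_100".
    safe = [re.sub(r"\W", "_", c) for c in item_cols]
    return store_col, safe

sample = "Store#,ItemA,A-100,A 100\n0100,Y,N,Y\n"
print(discover_item_columns(sample))
# ('Store#', ['ItemA', 'A_100', 'A_100'])
```

A new part number simply shows up as one more entry in the returned list, so nothing downstream has to be hard-coded.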

Dawn Hast



[email protected] wrote on 01/12/2010 09:36:08 AM:

> I believe the data to be consistent.  This spread sheet is generated
> by a computer program
> and the actual item numbers are used.   I am not 100% sure, however, 
> that they always
> come in the same order.  In other words, they may come in 
> part number order,
> so if a new part number was added to the spread sheet and it was a 
> "low" number,
> it would shift all other part numbers to the right.
> 
> -Bob
> 
> ----- Original Message -----
> From: "MDRD" <[email protected]>
> To: "RBASE-L Mailing List" <[email protected]>
> Sent: Tuesday, January 12, 2010 9:17:03 AM GMT -06:00 US/Canada Central
> Subject: [RBASE-L] - Re: Large Spread sheet matrix data

> 
> >> ItemA, ItemB  etc. is the actual item part number. 
> But will they always code this the same way, or could the item be 
> A-100 or A100 or A 100 depending
> on their mood? 
> 
> Marc
> 
> 
> From: [email protected] 
> Sent: Tuesday, January 12, 2010 8:57 AM
> To: RBASE-L Mailing List 
> Subject: [RBASE-L] - Re: Large Spread sheet matrix data
> 
> I guess my question is more geared towards a longer term, "no 
> intervention required" solution.
> 
> That is, I can manually import the spread sheet using ODBC or Gateway 
> etc., as you mention.
> However, since the number of columns could change, I am trying to 
> come up with a solution
> that would handle that situation automatically.   Any apps that have
> columns hard coded would have to be
> changed each time an item# (i.e. column) was added or removed.  I am
> attempting to come
> up with a system that would allow the customer to send these spread 
> sheets in on a regular
> basis and have Rbase seamlessly integrate regardless of the number 
> of columns without me
> having to modify the program each time.
> 
> The spread sheet has a header row:
> Store#    ItemA   ItemB  ItemC  etc. 
> where ItemA, ItemB etc. are the actual item part numbers. 
> 
> So the system would need to read the spread sheet header row to 
> determine what each column
> is (as these could change), then load the data rows 
> accordingly.  Not an easy task, but 
> perhaps possible.  So that is the first question..... is it feasible
> to load a changing matrix 
> automatically?
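For what it's worth, the "read the header row, then load the data rows accordingly" step is mechanical once the file is treated as text. A minimal Python sketch, assuming the CSV layout shown later in the thread (Store# first, one Y/N column per item):

```python
import csv
import io

def unpivot(csv_text):
    """Turn the wide matrix (one row per store, one column per item)
    into long (store, item) rows, one per Y cell, regardless of how
    many item columns the header happens to contain."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    items = header[1:]            # item part numbers, read at run time
    rows = []
    for rec in reader:
        store = rec[0]
        for item, flag in zip(items, rec[1:]):
            if flag.strip().upper() == "Y":
                rows.append((store, item))
    return rows

sample = ("Store#,ItemA,ItemB,ItemC\n"
          "0100,Y,N,Y\n"
          "0200,N,Y,N\n")
print(unpivot(sample))
# [('0100', 'ItemA'), ('0100', 'ItemC'), ('0200', 'ItemB')]
```

Because the item list comes from the header on every run, adding or removing a column requires no program change.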
> 
> The second question is data design.    Is a simple table (Store#, 
> Item#) with 300,000-plus rows
> the best design?  Of those 300,000 rows, there would be only 1950 
> unique store numbers and only
> 157 unique item numbers.
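On the data-design side, one point worth noting: the long (Store#, Item#) table only needs rows for the Y cells, since the N's are implied by absence, which typically keeps it well under the 300,000-row ceiling. The counts asked about in the original message then fall out of simple grouping. A hypothetical Python sketch with made-up data:

```python
from collections import Counter

# (store, item) pairs for the Y cells only -- the N's are implicit.
stocked = [("0100", "ItemA"), ("0100", "ItemC"),
           ("0200", "ItemB"), ("0300", "ItemA")]

stores_per_item = Counter(item for _, item in stocked)
items_per_store = Counter(store for store, _ in stocked)

print(stores_per_item["ItemA"])   # 2 stores stock ItemA
print(items_per_store["0100"])    # store 0100 carries 2 items
```

In SQL terms these are just GROUP BY counts on the two columns, which is exactly the kind of query the matrix layout makes awkward.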
> 
> Thanks-
> 
> 
> 
> 
> 
> 
> ----- Original Message -----
> From: "James Bentley" <[email protected]>
> To: "RBASE-L Mailing List" <[email protected]>
> Sent: Tuesday, January 12, 2010 8:36:02 AM GMT -06:00 US/Canada Central
> Subject: [RBASE-L] - Re: Large Spread sheet matrix data

> Bob,
> 
> You might create an ODBC connection to the spread sheet. I believe 
> there is a Microsoft driver to do that.  If not, unload the spread 
> sheet as a CSV and use the Microsoft CSV/Text ODBC connector.
> You would use the ODBC connection to define a 158-column table: Store
> Number plus the 157 items.
> 
> You can then define various views to access the data.  The UNION 
> statement could be used effectively to transform the column data
> to row data by store.
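If the 157 column names are pulled from the header (or from sys_columns) rather than typed in, the UNION Jim describes can be generated instead of hard-coded. A hedged Python sketch; the table and column names are illustrative assumptions:

```python
def union_unpivot_sql(table, store_col, item_cols):
    """Build a UNION ALL statement that turns one wide row per store
    into one (store, item, flag) row per item column."""
    selects = [
        f"SELECT {store_col}, '{c}' AS item, {c} AS flag FROM {table}"
        for c in item_cols
    ]
    return "\nUNION ALL\n".join(selects)

sql = union_unpivot_sql("stockmatrix", "storenum", ["ItemA", "ItemB"])
print(sql)
```

UNION ALL is used rather than UNION since each branch produces distinct rows by construction and needs no duplicate elimination.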
> 
> Jim Bentley
> American Celiac Society
> [email protected]
> tel: 1-504-737-3293 
> 
> From: "[email protected]" <[email protected]>
> To: RBASE-L Mailing List <[email protected]>
> Sent: Tue, January 12, 2010 8:01:08 AM
> Subject: [RBASE-L] - Large Spread sheet matrix data
> 
> 
> 
> I have an application where a supplied data source is an Excel 
> spreadsheet.
> This spread sheet is a matrix consisting of locations as rows and 
> stocked items
> as columns.       (I have no control over the spread sheet design!)
> 
> Store#   ItemA   ItemB   ItemC   ItemD .....
> 0100     Y       N       N       Y
> 0200     Y       Y       N       N
> 0300     N       Y       Y       N
> 0400     N       N       N       Y
> 
> etc.
> 
> 
> This spread sheet is updated regularly, with stores added or removed
> (# of rows), stock
> items added or removed (# of columns)  and the store information 
> changed. (Y/N)
> 
> Currently there are 1950 rows in the spread sheet and 157 items 
> (columns).
> 
> I need to link Rbase to this spreadsheet and be able to run routines
> against store# and what items each store stocks: information such as 
> the # of stores that stock ItemA, or how many items Store 0100 
> carries, etc.
> 
> Any thoughts on an efficient method to link this continually 
> changing data into Rbase table(s)?   A simple table with Store 
> Number and Item Number would have 306,150 rows of data (1950 x 157) 
> and would also be difficult to populate from the above matrix.
> 
> Innovative ideas appreciated!
> 
> Thanks-
> Bob
> 
> 
> 
> 
