Hi Roberto,

On 19/4/2012 08:52, Tupy... nambá wrote:
> Alexandre,
> From my point of view, I prefer to avoid using BLOB fields. First of all,
> because these kinds of fields are not suited to searches of any kind (most
> of them are pictures). Second, because they normally have very large
> content, which makes the DB grow by a large amount. I think the most
> important property of a DB is its search capability, and having fields
> that don't allow us to search disturbs the functionality of the DB.
> I prefer to store the files outside the DB, keeping only the path to each
> file inside it. That way you keep the speed of all operations (searches
> and backups/restores) without a meaningful increase in the size of the DB.
>
> I'm not sure about the reasons for the backup/restore speed problem, but I
> believe that inside the DB almost the same thing happens as in the OS
> environment: when adjacent areas are full, the OS or the DB manager must
> look for distant areas to store parts of the data, causing data
> fragmentation. To access the complete data, the OS or DB manager must
> "remount" it before delivering it to the client. And the DB itself suffers
> from fragmentation of the DB file at the disk level.
> On file servers, file fragmentation is normally low (you don't edit the
> files directly on the server), and you can still defragment the files.
> For SQL Server, you find discussions about internal table and index
> fragmentation, and there are commands to repair it.
> For Firebird/InterBase, nobody talks about that, but we know it happens
> and can become a problem when the DB grows large. BLOBs are the worst
> cause of it, affecting not only the BLOB fields and their data but also
> fields and data of other types. And there are no commands (I have never
> seen any) for internal DB defragmentation.
> Try some experiments on this, comparing different solutions to the same
> problem. A DB that is filled all at once may not show great differences,
> but DBs filled in the usual day-by-day way will, after a long time, show
> meaningful differences.
> Roberto Camargo, Rio de Janeiro / Brazil
>

In the past I used the approach of storing just the filename, and I still 
use it in some cases, but when everything is inside the database it's 
easier to be sure that backup/restore covers everything, to move the 
content around, and to get transaction control (all the ACID features), 
which I would have to re-implement if I worked at the filesystem level. 
Since you are in Brazil, I can point to a case where the need to store 
BLOBs is almost mandatory: the storage of the XML files of the "Nota 
Fiscal Eletronica" (electronic invoice). We need to keep the data for the 
legal retention periods specified in our legislation, and handling 
thousands (millions?) of individual files on the filesystem is not the 
best option in my point of view; it's much easier to be sure that 
everything is safe inside the database.
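
Just to make the comparison concrete, here is a rough sketch of both 
approaches in Python with the fdb driver. The table names, columns, file 
paths and connection details are made up, only to illustrate the idea:

# Minimal sketch of the two approaches, using the Python "fdb" driver for
# Firebird. Table names, columns, paths and credentials are invented.
import fdb

con = fdb.connect(dsn='localhost:/data/invoices.fdb',
                  user='SYSDBA', password='masterkey')
cur = con.cursor()

with open('/incoming/invoice_0001.xml', 'rb') as f:
    xml_bytes = f.read()

# Approach 1: keep the XML on the filesystem and store only its path.
cur.execute("insert into invoice_files (invoice_id, file_path) values (?, ?)",
            (1, '/nfe/2012/04/invoice_0001.xml'))

# Approach 2: store the XML itself in a BLOB column, so backup/restore and
# transactions also cover the document.
cur.execute("insert into invoices (invoice_id, xml_doc) values (?, ?)",
            (1, xml_bytes))

con.commit()
con.close()

With the second form the XML goes along in a normal gbak backup/restore, 
and the insert happens in the same transaction as the other invoice data.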

I disagree with you that the main feature of an RDBMS is search. Search is 
part of the whole system, but the main feature, in my point of view, is to 
store data. :) Of course there is no sense in storing something if you 
cannot search for it, but a product that stores data efficiently and 
searches it not so efficiently could still be called an RDBMS, while the 
other way around is not possible. Quoting Ann Harrison from the top of my 
head (probably not the exact words): "if you don't need a correct answer, 
the answer is 13".

I don't use BLOBs that much, but in some cases I think they are a good 
solution.

Anyway, thanks for sharing your thoughts. I know that storing large binary 
data inside or outside the database is the kind of thing for which there 
is no rule of thumb to choose one or the other; I myself use both 
approaches for distinct use cases.

My concern is that something is "strange" regarding BLOB manipulation. 
It's too slow for me.

See you!

Alexandre
