File_DataStore Performance (Oracle 10.2.0.3)
File_DataStore Performance. [message #358001] Fri, 07 November 2008 11:34
redonisc
Messages: 20
Registered: March 2008
Location: Guatemala, C.A.
Junior Member
Hi, I've been using a FILE_DATASTORE, and I expect to end up with 135 folders containing about 279k files in total: a mix of .xls, .doc, and .pdf, every file under 4 MB.

Right now it has 81,850 files loaded, and performance is very, very slow. I've tested several times and it takes 1.2 minutes on average. Do you have any suggestions?

I set it up with the following script (no errors when running it):
Quote:

BEGIN
  CTX_DDL.CREATE_PREFERENCE('GC_ConcursoDoc_DStore', 'FILE_DATASTORE');
END;
/
CREATE TABLE GC_ConcursoDoc_Idx (
  id            NUMBER,
  nog_concurso  NUMBER(10),
  fecha_upload  DATE,
  filesize      VARCHAR2(20),
  mime          VARCHAR2(50),
  path_archivo  VARCHAR2(255),
  CONSTRAINT doc_pk PRIMARY KEY (id)
)
/
CREATE INDEX GC_ConcursoDoc_CTX ON GC_ConcursoDoc_Idx(path_archivo)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('FILTER CTXSYS.AUTO_FILTER
               DATASTORE GC_ConcursoDoc_DStore
               LEXER GTCProd_lex
               SYNC (ON COMMIT)')
/
CREATE SEQUENCE GC_CONCURSODOC_SEQ
  START WITH 1 INCREMENT BY 1 MINVALUE 1 NOCACHE NOCYCLE NOORDER
/
CREATE OR REPLACE PROCEDURE Loadfile_Concurso (
  p_nog         IN GC_ConcursoDoc_Idx.nog_concurso%TYPE,
  p_file_name   IN GC_ConcursoDoc_Idx.path_archivo%TYPE,
  p_upload_date IN GC_ConcursoDoc_Idx.fecha_upload%TYPE,
  p_filesize    IN GC_ConcursoDoc_Idx.filesize%TYPE,
  p_mime        IN GC_ConcursoDoc_Idx.mime%TYPE
) AS
BEGIN
  INSERT INTO GC_ConcursoDoc_Idx
    (id, nog_concurso, path_archivo, fecha_upload, filesize, mime)
  VALUES
    (GC_CONCURSODOC_SEQ.NEXTVAL, p_nog, p_file_name, p_upload_date, p_filesize, p_mime);
  COMMIT;
END;
/



The index token table has a lot of rows: 35,994,976.
Quote:
SELECT COUNT(*) FROM dr$GC_ConcursoDoc_CTX$I;
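A rough way to gauge how fragmented that token table is (assuming the standard DR$...$I layout with a TOKEN_TEXT column) is to compare total rows to distinct tokens; a high average of rows per token usually means a fragmented index:

```sql
-- Sketch: rows per distinct token in the $I table.
-- A well-optimized index tends to have few, large rows per token;
-- frequent per-commit syncs produce many small ones.
SELECT COUNT(*)                                        AS total_rows,
       COUNT(DISTINCT token_text)                      AS distinct_tokens,
       ROUND(COUNT(*) / COUNT(DISTINCT token_text), 1) AS avg_rows_per_token
FROM   dr$GC_ConcursoDoc_CTX$I;
```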



Cheers!

[Updated on: Fri, 07 November 2008 11:38]


Re: File_DataStore Performance. [message #358009 is a reply to message #358001] Fri, 07 November 2008 13:24
Barbara Boehmer
Messages: 9077
Registered: November 2002
Location: California, USA
Senior Member
It would probably be more efficient to load all of the paths first, then create the index. If you load one file at a time and synchronize on commit without ever optimizing, you are creating a highly fragmented index.
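Something along these lines is a sketch of that approach, reusing the table and preference names from the original script (adjust as needed); it creates the index once after the bulk load, and for ongoing loads replaces SYNC (ON COMMIT) with periodic manual maintenance:

```sql
-- 1. Drop the existing index (or create the new one only after loading):
DROP INDEX GC_ConcursoDoc_CTX;

-- 2. Bulk-load all the path rows via Loadfile_Concurso or plain INSERTs.

-- 3. Create the index once, after the data is in place.
--    Note: no SYNC (ON COMMIT) clause.
CREATE INDEX GC_ConcursoDoc_CTX ON GC_ConcursoDoc_Idx(path_archivo)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('FILTER CTXSYS.AUTO_FILTER
               DATASTORE GC_ConcursoDoc_DStore
               LEXER GTCProd_lex');

-- 4. For files added later, sync and optimize in batches
--    (e.g. from a scheduled job) instead of on every commit:
BEGIN
  CTX_DDL.SYNC_INDEX('GC_ConcursoDoc_CTX');
  CTX_DDL.OPTIMIZE_INDEX('GC_ConcursoDoc_CTX', CTX_DDL.OPTLEVEL_FULL);
END;
/
```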