I have a web service that transfers huge files (2 GB-ish):
    public bool UploadContent(System.Web.HttpContext context)
    {
        var file = context.Request.Files[0];
        var fileName = file.FileName;
        byte[] fileBytes = new byte[file.ContentLength];
        file.InputStream.Read(fileBytes, 0, fileBytes.Length);
        client.CreateResource(fileBytes);
        return true;
    }
The HttpContext has the contents of the file in Files[0], but I can't see a way to pass the bytes to the CreateResource(byte[] contents) method of the web service without making a copy of the byte array... which eats memory like candy. Is there a more efficient way to do this?
Edit: client.CreateResource() is part of a COTS product, and modifying it is outside our control.
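A side note on the snippet above: Stream.Read is not guaranteed to fill the whole array in a single call, so even the copying version should loop until everything has been read. A minimal sketch of that loop:

    // Stream.Read may return fewer bytes than requested, so loop until done.
    byte[] fileBytes = new byte[file.ContentLength];
    int total = 0;
    while (total < fileBytes.Length)
    {
        int read = file.InputStream.Read(fileBytes, total, fileBytes.Length - total);
        if (read == 0)
            break;      // end of stream reached early
        total += read;
    }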
Rather than sending all the bytes at once, you can send the file in chunks: seek through the file step by step, upload each chunk, and merge it with the previous ones on the server, so you never hold all the bytes in memory. You will need to update the client.CreateResource method, if you're allowed to modify it :)
Add the following parameters:

- string fileName — used to locate the file by name when the chunks start arriving
- byte[] buffer — the chunk being sent to the server via the web service
- long offset — tells the server how much data has been uploaded so far, so it can seek into the file and merge the buffer
The method now looks like this:
    public bool CreateResource(string fileName, byte[] buffer, long offset)
    {
        bool retVal = false;
        try
        {
            string filePath = "D:\\Temp\\UploadTest.extension";

            if (offset == 0)
                File.Create(filePath).Close();

            // Open a file stream and write the buffer at the given offset.
            // Don't open with FileMode.Append, because the transfer may wish
            // to start at a different point.
            using (FileStream fs = new FileStream(filePath, FileMode.Open,
                FileAccess.ReadWrite, FileShare.Read))
            {
                fs.Seek(offset, SeekOrigin.Begin);
                fs.Write(buffer, 0, buffer.Length);
            }
            retVal = true;
        }
        catch (Exception ex)
        {
            // Log the exception or send an error message to whoever cares.
        }
        return retVal;
    }
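One thing worth noting: the sample hardcodes filePath and never uses the fileName parameter. In a real implementation you would presumably build the path from it; a minimal sketch, where uploadRoot is a hypothetical directory of your choosing:

    // Hypothetical: build the target path from the fileName parameter.
    // Path.GetFileName strips any directory components a client might sneak in.
    string uploadRoot = @"D:\Temp";
    string filePath = Path.Combine(uploadRoot, Path.GetFileName(fileName));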
Now, to read the file in chunks from the InputStream of the HttpPostedFile, try the code below:
    public bool UploadContent(System.Web.HttpContext context)
    {
        // The file we want to upload.
        var file = context.Request.Files[0];
        var fs = file.InputStream;
        long offset = 0;                    // Starting offset.

        // Define the chunk size.
        int chunkSize = 65536;              // 64 * 1024 bytes = 64 KB.

        // Define the buffer array according to chunkSize.
        byte[] buffer = new byte[chunkSize];

        try
        {
            long fileSize = file.ContentLength; // Size of the file being uploaded.

            // Start reading the file.
            fs.Position = offset;
            int bytesRead = 0;
            while (offset != fileSize) // Keep uploading chunks until offset == fileSize.
            {
                bytesRead = fs.Read(buffer, 0, chunkSize); // Read the next chunk.
                if (bytesRead != buffer.Length)
                {
                    chunkSize = bytesRead;
                    byte[] trimmedBuffer = new byte[bytesRead];
                    Array.Copy(buffer, trimmedBuffer, bytesRead);
                    buffer = trimmedBuffer; // The trimmed buffer becomes the new 'buffer'.
                }

                // Send this chunk to the server. It is sent as a byte[] parameter;
                // the client and server have been configured to encode byte[] using MTOM.
                bool chunkAppended = client.CreateResource(file.FileName, buffer, offset);
                if (!chunkAppended)
                {
                    break;
                }

                // Update the offset after each successful send of the bytes.
                offset += bytesRead; // Save the offset position to allow resume.
            }
            return offset == fileSize;
        }
        catch (Exception ex)
        {
            return false;
        }
        finally
        {
            fs.Close();
        }
    }
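A design note on the loop above: because the server seeks to offset before writing (and deliberately avoids FileMode.Append), the same mechanism gives you resumable uploads. If a transfer dies partway, restart from the last offset that CreateResource acknowledged instead of from zero. A minimal sketch, assuming a hypothetical lastGoodOffset you persisted from the previous attempt:

    // Hypothetical resume: start from the last acknowledged offset instead of 0.
    // The rest of the upload loop above stays exactly the same.
    long offset = lastGoodOffset;
    fs.Position = offset;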
Disclaimer: I haven't tested this code. It's sample code to show how a large file upload can be achieved without exhausting memory.
Ref: source article.