SDFS Modification Guide (3)


In standalone mode, SDFS uses the BatchFileChunkStore implementation.

In BatchFileChunkStore.java, SDFS implements the functions that store the unique chunks of every file in the mounted volume. The encryption and compression behavior can also be configured here (the default is no encryption).

```java
public class BatchFileChunkStore implements AbstractChunkStore, AbstractBatchStore, Runnable {
    private String name;
    boolean compress = false;
    boolean encrypt = false;
    private HashMap<Long, Integer> deletes = new HashMap<Long, Integer>();
    boolean closed = false;
    boolean deleteUnclaimed = true;
    File staged_sync_location = new File(Main.chunkStore + File.separator + "syncstaged");
    File container_location = new File(Main.chunkStore);
    int checkInterval = 15000;
    public boolean clustered;
    private int mdVersion = 0;
    // ...
```

Whenever a unique chunk is about to be stored on the logical disk, SDFS calls the writeChunk function in BatchFileChunkStore.java at line 150.

```java
@Override
public long writeChunk(byte[] hash, byte[] chunk, int len, String uuid) throws IOException {
    try {
        return HashBlobArchive.writeBlock(hash, chunk, uuid);
    } catch (HashExistsException e) {
        throw e;
    } catch (Exception e) {
        SDFSLogger.getLog().warn("error writing hash", e);
        throw new IOException(e);
    }
}
```

The write function itself is in HashBlobArchive.java at line 734:

```java
public static long writeBlock(byte[] hash, byte[] chunk, String uuid)
        throws IOException, ArchiveFullException, ReadOnlyArchiveException {
    if (closed)
        throw new IOException("Closed");
    Lock l = slock.readLock();
    l.lock();
    if (uuid == null || uuid.trim() == "") {
        uuid = "default";
    }
    try {
        for (;;) {
            try {
                HashBlobArchive ar = writableArchives.get(uuid);
                ar.putChunk(hash, chunk);
                return ar.id;
            } catch (HashExistsException e) {
                throw e;
            } catch (ArchiveFullException | NullPointerException | ReadOnlyArchiveException e) {
                if (l != null)
                    l.unlock();
                l = slock.writeLock();
                l.lock();
                try {
                    HashBlobArchive ar = writableArchives.get(uuid);
                    if (ar != null && ar.writeable)
                        ar.putChunk(hash, chunk);
                    else {
                        ar = new HashBlobArchive(hash, chunk);
                        ar.uuid = uuid;
                        writableArchives.put(uuid, ar);
                    }
                    return ar.id;
                } catch (Exception e1) {
                    l.unlock();
                    l = null;
                } finally {
                    if (l != null)
                        l.unlock();
                    l = null;
                }
            } catch (Throwable t) {
                SDFSLogger.getLog().error("unable to write", t);
                throw new IOException(t);
            }
        }
    } catch (NullPointerException e) {
        SDFSLogger.getLog().error("unable to write data", e);
        throw new IOException(e);
    } finally {
        if (l != null)
            l.unlock();
    }
}
```
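The essence of writeBlock is a read-lock fast path that appends to an existing per-uuid archive, plus a write-lock fallback that creates the archive when the fast path fails. The following is a minimal standalone sketch of that pattern, not SDFS code: StringBuffer stands in for HashBlobArchive (whose putChunk synchronizes internally), and the class and method names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ArchiveCache {
    private final Map<String, StringBuffer> writableArchives = new HashMap<>();
    private final ReentrantReadWriteLock slock = new ReentrantReadWriteLock();

    public void append(String uuid, String chunk) {
        // Fast path: under the read lock, append to an archive that already exists.
        Lock l = slock.readLock();
        l.lock();
        try {
            StringBuffer ar = writableArchives.get(uuid);
            if (ar != null) {
                ar.append(chunk);
                return;
            }
        } finally {
            l.unlock();
        }
        // Slow path: upgrade to the write lock and create the archive,
        // mirroring the ArchiveFullException / NullPointerException branch above.
        Lock w = slock.writeLock();
        w.lock();
        try {
            writableArchives.computeIfAbsent(uuid, k -> new StringBuffer()).append(chunk);
        } finally {
            w.unlock();
        }
    }

    public String contents(String uuid) {
        Lock l = slock.readLock();
        l.lock();
        try {
            StringBuffer ar = writableArchives.get(uuid);
            return ar == null ? "" : ar.toString();
        } finally {
            l.unlock();
        }
    }
}
```

Note that ReentrantReadWriteLock does not support in-place upgrading, which is why both the sketch and the original release the read lock before taking the write lock.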

In BatchFileChunkStore.java, add a byte-array-to-hex-string helper function, bytesToHex, after line 147:

```java
private final static char[] hexArray = "0123456789ABCDEF".toCharArray();

public static String bytesToHex(byte[] bytes) {
    char[] hexChars = new char[bytes.length * 2];
    for (int j = 0; j < bytes.length; j++) {
        int v = bytes[j] & 0xFF;
        hexChars[j * 2] = hexArray[v >>> 4];
        hexChars[j * 2 + 1] = hexArray[v & 0x0F];
    }
    return new String(hexChars);
}
```
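The helper can be exercised on its own before wiring it into writeChunk. The sketch below hashes a sample chunk and renders the digest the way the modified writeChunk will log it; SHA-256 is used purely for illustration here, since SDFS's actual hashing engine is configurable and may differ.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HexDemo {
    private static final char[] hexArray = "0123456789ABCDEF".toCharArray();

    // Same conversion as the bytesToHex added to BatchFileChunkStore.java.
    public static String bytesToHex(byte[] bytes) {
        char[] hexChars = new char[bytes.length * 2];
        for (int j = 0; j < bytes.length; j++) {
            int v = bytes[j] & 0xFF;
            hexChars[j * 2] = hexArray[v >>> 4];
            hexChars[j * 2 + 1] = hexArray[v & 0x0F];
        }
        return new String(hexChars);
    }

    // Hash a chunk and return the hex digest (SHA-256 for illustration only).
    public static String hashHex(byte[] chunk) {
        try {
            return bytesToHex(MessageDigest.getInstance("SHA-256").digest(chunk));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```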

Add the output code after line 152 (inside the writeChunk function):

```java
@Override
public long writeChunk(byte[] hash, byte[] chunk, int len, String uuid) throws IOException {
    try {
        String metaDataPath = "/sdfsTemp/dedup/" + uuid;
        try {
            FileWriter fw = new FileWriter(metaDataPath, true);
            fw.write(Integer.toString(len));
            fw.write("\t");
            fw.write(Integer.toString(chunk.length));
            fw.write("\t");
            fw.write(bytesToHex(hash));
            fw.write("\t");
            fw.write("\n");
            fw.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return HashBlobArchive.writeBlock(hash, chunk, uuid);
    } catch (HashExistsException e) {
        throw e;
    } catch (Exception e) {
        SDFSLogger.getLog().warn("error writing hash", e);
        throw new IOException(e);
    }
}
```
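Each record the modified writeChunk appends is one tab-separated line: len, chunk.length, and the hex hash of the unique chunk. A small post-processing sketch (not part of SDFS; the class name and field positions are just this log format as written above) can then total the bytes actually stored:

```java
import java.util.List;

public class DedupLog {
    // Sum the second field (chunk.length) across records like
    // "<len>\t<chunkLen>\t<hexHash>\t" written by the modified writeChunk.
    public static long totalStoredBytes(List<String> lines) {
        long total = 0;
        for (String line : lines) {
            String[] f = line.trim().split("\t");
            total += Long.parseLong(f[1]);
        }
        return total;
    }
}
```

Comparing this total against the logical bytes written to the volume gives a rough view of how much deduplication saved.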
