How HBase selects store files for compaction

The algorithm is basically as follows:
 
1. Run over the set of all store files, from oldest to youngest.

2. If there are more than 3 (hbase.hstore.compactionThreshold) store files left and the current store file is 20% larger than the sum of all younger store files, and it is larger than the memstore flush size, then we go on to the next, younger, store file and repeat this step.

3. Once one of the conditions in step 2 is no longer valid, the store files from the current one to the youngest one are the ones that will be merged together. If there are fewer than the compactionThreshold, no merge will be performed. There is also a limit which prevents more than 10 (hbase.hstore.compaction.max) store files from being merged in one compaction.
 
The compaction-related configuration parameters can be viewed or set in hbase-default.xml or hbase-site.xml.

this.minFilesToCompact = Math.max(2,
    conf.getInt("hbase.hstore.compaction.min",
        /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3)));
this.majorCompactionTime = getNextMajorCompactTime();
this.maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
this.minCompactSize = conf.getLong("hbase.hstore.compaction.min.size",
    this.region.memstoreFlushSize);
this.maxCompactSize = conf.getLong("hbase.hstore.compaction.max.size",
    Long.MAX_VALUE);

Update 2011/7/11: annotated code showing which store files are selected for a minor compaction:

  //////////////////////////////////////////////////////////////////////////////
  // Compaction
  //////////////////////////////////////////////////////////////////////////////

  /**
  * Compact the StoreFiles.  This method may take some time, so the calling
  * thread must be able to block for long periods.
  *
  * <p>During this time, the Store can work as usual, getting values from
  * StoreFiles and writing new StoreFiles from the memstore.
  *
  * Existing StoreFiles are not destroyed until the new compacted StoreFile is
  * completely written-out to disk.
  *
  * <p>The compactLock prevents multiple simultaneous compactions.
  * The structureLock prevents us from interfering with other write operations.
  *
  * <p>We don't want to hold the structureLock for the whole time, as a compact()
  * can be lengthy and we want to allow cache-flushes during this period.
  *
  * @param forceMajor True to force a major compaction regardless of thresholds
  * @return row to split around if a split is needed, null otherwise
  * @throws IOException
  */
  StoreSize compact(final boolean forceMajor) throws IOException {
    boolean forceSplit = this.region.shouldForceSplit();
    boolean majorcompaction = forceMajor;
    synchronized (compactLock) { // only one thread may compact at a time; is the lock scoped to the store, the region, or the region server?
      this.lastCompactSize = 0;

      // filesToCompact are sorted oldest to newest.
      List<StoreFile> filesToCompact = this.storefiles;
      if (filesToCompact.isEmpty()) {
        LOG.debug(this.storeNameStr + ": no store files to compact");
        return null;
      }

      // Check to see if we need to do a major compaction on this region.
      // If so, change doMajorCompaction to true to skip the incremental
      // compacting below. Only check if doMajorCompaction is not true.
      if (!majorcompaction) {
        majorcompaction = isMajorCompaction(filesToCompact);
      }

      boolean references = hasReferences(filesToCompact);
      if (!majorcompaction && !references &&
          (forceSplit || (filesToCompact.size() < compactionThreshold))) {
        return checkSplit(forceSplit);
      }

      /* get store file sizes for incremental compacting selection.
       * normal skew:
       *
       *        older ----> newer
       *    _
       *    | |  _
       *    | |  | |  _
       *  --|-|- |-|- |-|---_-------_-------  minCompactSize (config parameter)
       *    | |  | |  | |  | |  _  | |
       *    | |  | |  | |  | | | | | |
       *    | |  | |  | |  | | | | | |
       */
      int countOfFiles = filesToCompact.size();
      long [] fileSizes = new long[countOfFiles];
      long [] sumSize = new long[countOfFiles];
      for (int i = countOfFiles-1; i >= 0; --i) {
        StoreFile file = filesToCompact.get(i);
        Path path = file.getPath();
        if (path == null) {
          LOG.error("Path is null for " + file);
          return null;
        }
        StoreFile.Reader r = file.getReader();
        if (r == null) {
          LOG.error("StoreFile " + file + " has a null Reader");
          return null;
        }
        fileSizes[i] = file.getReader().length();
        // calculate the sum of fileSizes[i,i+maxFilesToCompact-1) for algo
        int tooFar = i + this.maxFilesToCompact - 1;
        /**
         * e.g. sumSize[i] holds the total size of the (maxFilesToCompact - 1)
         * consecutive store files starting at index i, truncated at the array end:
         * index : 0,  1,  2,  3,  4,  5
         * f size: 10, 20, 15, 25, 15, 10
         * tooFar:  2,  3,  4,  5,  6,  7
         * s size: 30, 35, 40, 40, 25, 10 (maxFilesToCompact = 3, countOfFiles = 6)
         */
        sumSize[i] = fileSizes[i]
                  + ((i+1    < countOfFiles) ? sumSize[i+1]      : 0)
                  - ((tooFar < countOfFiles) ? fileSizes[tooFar] : 0);
      }
