@InterfaceAudience.Public @InterfaceStability.Stable public abstract class CombineFileInputFormat<K,V> extends FileInputFormat<K,V>
An InputFormat that returns CombineFileSplits from the
InputFormat.getSplits(JobContext) method.
Splits are constructed from the files under the input paths.
A split cannot have files from different pools.
Each split returned may contain blocks from different files.
If a maxSplitSize is specified, then blocks on the same node are
combined to form a single split. Blocks that are left over are
then combined with other blocks in the same rack.
If maxSplitSize is not specified, then blocks from the same rack
are combined in a single split; no attempt is made to create
node-local splits.
If the maxSplitSize is equal to the block size, then this class
is similar to the default splitting behavior in Hadoop: each
block is a locally processed split.
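As an illustration of how these three knobs interact, a subclass can tune combining through the protected setters documented below. This is a minimal sketch; the class name and byte values are assumptions, not defaults shipped with Hadoop.

```java
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;

// Sketch: left abstract because createRecordReader must still be supplied.
public abstract class TunedCombineInputFormat<K, V> extends CombineFileInputFormat<K, V> {
    public TunedCombineInputFormat() {
        // Node-local blocks are packed into splits no larger than this cap.
        setMaxSplitSize(128L * 1024 * 1024);
        // Node leftovers below this size are passed up to rack-level combining.
        setMinSplitSizeNode(64L * 1024 * 1024);
        // Rack leftovers below this size are combined with blocks from other racks.
        setMinSplitSizeRack(32L * 1024 * 1024);
    }
}
```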
Subclasses implement
InputFormat.createRecordReader(InputSplit, TaskAttemptContext)
to construct RecordReaders for CombineFileSplits.

See Also: CombineFileSplit

Nested classes/interfaces inherited from class FileInputFormat: FileInputFormat.Counter

| Modifier and Type | Field and Description |
|---|---|
| static String | SPLIT_MINSIZE_PERNODE |
| static String | SPLIT_MINSIZE_PERRACK |
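Both fields are ordinary Configuration keys, so the per-node and per-rack minimums can also be set at job-setup time without subclassing. A minimal sketch (class name and byte values are assumptions):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;

public class CombineJobSetup {
    public static Job newJob() throws IOException {
        Configuration conf = new Configuration();
        // Minimum bytes a node must contribute before a node-local split is kept.
        conf.setLong(CombineFileInputFormat.SPLIT_MINSIZE_PERNODE, 32L * 1024 * 1024);
        // Minimum bytes per rack before a rack-local split is kept.
        conf.setLong(CombineFileInputFormat.SPLIT_MINSIZE_PERRACK, 64L * 1024 * 1024);
        return Job.getInstance(conf, "combine-small-files"); // job name is arbitrary
    }
}
```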
Fields inherited from class FileInputFormat: DEFAULT_LIST_STATUS_NUM_THREADS, INPUT_DIR, INPUT_DIR_NONRECURSIVE_IGNORE_SUBDIRS, INPUT_DIR_RECURSIVE, LIST_STATUS_NUM_THREADS, NUM_INPUT_FILES, PATHFILTER_CLASS, SPLIT_MAXSIZE, SPLIT_MINSIZE

| Constructor and Description |
|---|
| CombineFileInputFormat() Default constructor. |
| Modifier and Type | Method and Description |
|---|---|
| protected void | createPool(List<org.apache.hadoop.fs.PathFilter> filters) Create a new pool and add the filters to it. |
| protected void | createPool(org.apache.hadoop.fs.PathFilter... filters) Create a new pool and add the filters to it. |
| abstract RecordReader<K,V> | createRecordReader(InputSplit split, TaskAttemptContext context) This is not implemented yet. |
| protected org.apache.hadoop.fs.BlockLocation[] | getFileBlockLocations(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus stat) |
| List<InputSplit> | getSplits(JobContext job) Generate the list of files and make them into FileSplits. |
| protected boolean | isSplitable(JobContext context, org.apache.hadoop.fs.Path file) Is the given filename splittable? Usually true, but if the file is stream compressed, it will not be. |
| protected void | setMaxSplitSize(long maxSplitSize) Specify the maximum size (in bytes) of each split. |
| protected void | setMinSplitSizeNode(long minSplitSizeNode) Specify the minimum size (in bytes) of each split per node. |
| protected void | setMinSplitSizeRack(long minSplitSizeRack) Specify the minimum size (in bytes) of each split per rack. |
Methods inherited from class FileInputFormat: addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize

public static final String SPLIT_MINSIZE_PERNODE
public static final String SPLIT_MINSIZE_PERRACK
protected void setMaxSplitSize(long maxSplitSize)
protected void setMinSplitSizeNode(long minSplitSizeNode)
protected void setMinSplitSizeRack(long minSplitSizeRack)
protected void createPool(List<org.apache.hadoop.fs.PathFilter> filters)
protected void createPool(org.apache.hadoop.fs.PathFilter... filters)
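A sketch of pool usage, tying back to the rule that a split cannot have files from different pools: each createPool call in the constructor opens a separate pool, so files matched by different filters are never combined into one split. The class name and file suffixes are invented for illustration.

```java
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;

// Sketch: left abstract because createRecordReader must still be supplied.
public abstract class PooledCombineInputFormat<K, V> extends CombineFileInputFormat<K, V> {
    public PooledCombineInputFormat() {
        PathFilter logs = path -> path.getName().endsWith(".log"); // hypothetical suffix
        PathFilter csvs = path -> path.getName().endsWith(".csv"); // hypothetical suffix
        // Two pools: .log files and .csv files are never combined into one split.
        createPool(logs);
        createPool(csvs);
    }
}
```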
protected boolean isSplitable(JobContext context, org.apache.hadoop.fs.Path file)
Description copied from class: FileInputFormat
FileInputFormat always returns true. Implementations that may deal with non-splittable files must override this method. FileInputFormat implementations can override this and return false to ensure that individual input files are never split up, so that Mappers process entire files.
Overrides: isSplitable in class FileInputFormat<K,V>
Parameters: context - the job context; file - the file name to check
public List<InputSplit> getSplits(JobContext job) throws IOException
Description copied from class: FileInputFormat
Generate the list of files and make them into FileSplits.
Overrides: getSplits in class FileInputFormat<K,V>
Parameters: job - the job context
Returns: InputSplits for the job
Throws: IOException
public abstract RecordReader<K,V> createRecordReader(InputSplit split, TaskAttemptContext context) throws IOException
This is not implemented yet.
Specified by: createRecordReader in class InputFormat<K,V>
Parameters: split - the split to be read; context - the information about the task
Throws: IOException
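A typical implementation delegates to CombineFileRecordReader, which walks the chunks of the CombineFileSplit and instantiates one nested reader per chunk. In this sketch, WholeChunkReader is a hypothetical per-chunk RecordReader (it must expose a (CombineFileSplit, TaskAttemptContext, Integer) constructor); only CombineFileRecordReader itself is part of the Hadoop API.

```java
import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

// Sketch: an input format that packs many small files into few splits.
public class SmallFilesInputFormat extends CombineFileInputFormat<Text, BytesWritable> {
    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit split,
            TaskAttemptContext context) throws IOException {
        // One WholeChunkReader (hypothetical) is created per file chunk in the split.
        return new CombineFileRecordReader<Text, BytesWritable>(
                (CombineFileSplit) split, context, WholeChunkReader.class);
    }
}
```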
protected org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus stat) throws IOException
Throws: IOException