Java Basics: IO (2)

List the entries under the current project directory, sort them in dictionary order, determine whether each child path is a file or a directory, and print the results.

18/07/19 17:35:12 INFO io.FileDemo: root: directory is D:\workspace_spark\SparkInAction\.
18/07/19 17:35:12 INFO io.FileDemo: child: .idea is a directory
18/07/19 17:35:12 INFO io.FileDemo: child: data is a directory
18/07/19 17:35:12 INFO io.FileDemo: child: libs is a directory
18/07/19 17:35:12 INFO io.FileDemo: child: out is a directory
18/07/19 17:35:12 INFO io.FileDemo: child: pom.xml is a file
18/07/19 17:35:12 INFO io.FileDemo: child: spark-warehouse is a directory
18/07/19 17:35:12 INFO io.FileDemo: child: SparkInAction.iml is a file
18/07/19 17:35:12 INFO io.FileDemo: child: src is a directory
18/07/19 17:35:12 INFO io.FileDemo: child: target is a directory

InputStream and OutputStream: overall class diagram
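The FileDemo source that produced this log is not shown in this part of the series. As a rough sketch of the idea, the following minimal program (class and variable names here are illustrative assumptions, and it prints to stdout instead of logging) lists the current directory in dictionary order and labels each entry:

```java
import java.io.File;
import java.util.Arrays;

public class FileDemoSketch {
    public static void main(String[] args) {
        File root = new File(".");
        if (!root.isDirectory()) {
            System.out.println("root is not a directory");
            return;
        }
        System.out.println("root: directory is " + root.getAbsolutePath());
        String[] children = root.list();
        if (children == null) {
            return; // list() returns null on an I/O error
        }
        Arrays.sort(children); // dictionary (lexicographic) order
        for (String name : children) {
            File child = new File(root, name);
            System.out.println("child: " + name
                    + (child.isDirectory() ? " is a directory" : " is a file"));
        }
    }
}
```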

The table in the previous part listed the streams in Java IO, and the picture is fairly complex. We start with the hierarchy of the byte streams.
The overall hierarchy is as follows:

In the hierarchy diagram above, for easy distinction, interfaces are shown in blue, abstract classes in red, and concrete classes in green.
In JDK 1.8, the different kinds of streams are distinguished mainly through five interfaces: Closeable, Flushable, Readable, Appendable, and AutoCloseable. AutoCloseable, the parent interface of Closeable, exists mainly to support try-with-resources exception handling.
InputStream implements Closeable, so input streams can be managed with try-with-resources; the sample code later in this article demonstrates this.
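To illustrate that point concretely (the path data/data.txt is taken from the example later in this article), a stream declared in a try-with-resources statement is closed automatically when the block exits, even if an exception is thrown:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TryWithResourceDemo {
    public static void main(String[] args) {
        // InputStream implements Closeable, which extends AutoCloseable,
        // so it may be declared in the try-with-resources header below.
        // The stream is closed automatically; no finally block is needed.
        try (InputStream in = new FileInputStream("data/data.txt")) {
            int b;
            while ((b = in.read()) != -1) {
                System.out.print((char) b);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```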

InputStream class diagram

The InputStream class diagram in detail is as follows:

InputStream implements the Closeable interface and has five subclasses shown here; among them, FilterInputStream acts as the decorator class. Its class diagram is as follows:

Basic workflow for using an input stream

Let us first look at an example that reads from a file, to understand the general flow of using an IO stream.
We rewrite the earlier example, which judged whether each entry under the current project is a file or a directory, so that it now descends into the project's data folder and reads the files whose names start with data, printing their contents:

package com.molyeo.java.io;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.*;

/**
 * Created by zhangkh on 2018/7/20.
 */
public class ByteBasedStream {
    static Logger logger = LoggerFactory.getLogger(ByteBasedStream.class.getName());

    public static void main(String[] args) throws IOException {
        File path = new File(".");
        if (path.isDirectory()) {
            logger.info("root: directory is {}", path.getAbsolutePath());
            String[] list = path.list(new DirFilter("d.*"));
            logger.info("File after first filter:");
            for (String dirItem : list) {
                File file = new File(path, dirItem);
                if (file.isDirectory()) {
                    logger.info("child: {} is a directory", dirItem);
                    String[] childList = file.list(new DirFilter("data*.txt"));
                    logger.info("File after second filter");
                    for (String childItem : childList) {
                        File childFile = new File(file, childItem);
                        if (childFile.isFile()) {
                            logger.info("Secondary child: {} is a file", childItem);
                            logger.info("start read file {}", childFile.getCanonicalPath());
                            read(childFile);
                        }
                    }
                } else {
                    logger.info("child: {} is a file", dirItem);
                }
            }
        } else {
            logger.info("root is not a directory");
        }
    }

    public static void read(File file) throws IOException {
        InputStream inputStream = null;
        try {
            inputStream = new FileInputStream(file);
            int byteData = inputStream.read();
            while (byteData != -1) {
                logger.info("byteData={}, char result={}", byteData, (char) byteData);
                byteData = inputStream.read();
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (inputStream != null) {
                inputStream.close();
            }
        }
    }
}

The data.txt file contains the text Hadoop; the program's log output is as follows:

18/08/04 22:19:58 INFO io.ByteBasedStream: root: directory is D:\workspace_spark\SparkInAction\.
18/08/04 22:19:58 INFO io.ByteBasedStream: File after first filter:
18/08/04 22:19:58 INFO io.ByteBasedStream: child: data is a directory
18/08/04 22:19:58 INFO io.ByteBasedStream: File after second filter
18/08/04 22:19:58 INFO io.ByteBasedStream: Secondary child: data.txt is a file
18/08/04 22:19:58 INFO io.ByteBasedStream: start read file D:\workspace_spark\SparkInAction\data\data.txt
18/08/04 22:19:58 INFO io.ByteBasedStream: byteData=72, char result=H
18/08/04 22:19:58 INFO io.ByteBasedStream: byteData=97, char result=a
18/08/04 22:19:58 INFO io.ByteBasedStream: byteData=100, char result=d
18/08/04 22:19:58 INFO io.ByteBasedStream: byteData=111, char result=o
18/08/04 22:19:58 INFO io.ByteBasedStream: byteData=111, char result=o
18/08/04 22:19:58 INFO io.ByteBasedStream: byteData=112, char result=p
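Calling read() with no arguments, as above, performs one underlying read per byte, which is easy to follow but slow for larger files. A common refinement, shown here as a sketch rather than part of the original example, reads into a byte buffer with read(byte[]) and uses try-with-resources instead of the explicit finally block:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedReadSketch {
    // Reads the whole file through an 8 KB buffer; read(buf) returns the
    // number of bytes actually read, or -1 at end of stream.
    public static String readAll(File file) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (InputStream in = new FileInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                for (int i = 0; i < n; i++) {
                    sb.append((char) buf[i]); // fine for ASCII content like data.txt
                }
            }
        }
        return sb.toString();
    }
}
```

Note that the cast to char only works for single-byte (ASCII) content; character streams, covered later in this series, are the right tool for general text.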
