How to Compute Each Node's Maximum Depth in a DAG

Problem background:

Most readers are familiar with the LCA, the lowest common ancestor. Traditionally, though, the LCA is defined on trees, because only on a tree does every node have a well-defined depth. Suppose we now want to find the LCA of two arbitrary nodes in a directed acyclic graph (DAG). How should that LCA be defined? Two nodes in a DAG may have several "lowest" common ancestors, none of which is an ancestor of another, so the notion is ambiguous. One way out is to pick, among these candidates, the one with the maximum depth in the DAG. For example, with edges p→u, p→v, q→u, q→v, and r→p, both p and q are lowest common ancestors of u and v, but the edge r→p makes p deeper, so we pick p. You may now ask: what if several candidates share the same maximum depth? That question is beyond the scope of this article; it will be covered later in this graph-algorithm series, in "An Information-Entropy-Based LCA Algorithm for DAGs."

So the next task is to compute the maximum depth of every node in a DAG. Plain DFS? It works on small graphs, but on large graphs it is hopeless: the same node is revisited once for every path that reaches it, so the search cost can grow exponentially. What can we do instead?

The simplest efficient approach is topological sort plus dynamic programming, with the transition dp[v] = max(dp[v], dp[u] + 1) for every edge u→v. The idea: since the graph is a DAG, enqueue every node whose in-degree is 0, then repeatedly dequeue a node, relax the depth of each of its successors, decrement their in-degrees (enqueuing any successor whose in-degree drops to 0), and iterate until every node has been processed. Simple, isn't it?
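The procedure above can be condensed into a minimal, self-contained sketch. The node count and edge list in main() are made-up illustration data; the two large implementations further below are the real versions from this article.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Minimal sketch of topological sort + DP for maximum depth in a DAG.
public class MaxDepthSketch {
    public static int[] maxDepths(int n, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        int[] indegree = new int[n];
        for (int[] e : edges) {            // each e is an edge e[0] -> e[1]
            adj.get(e[0]).add(e[1]);
            indegree[e[1]]++;
        }
        int[] dp = new int[n];             // dp[v] = max depth of v; roots stay 0
        Queue<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < n; i++)
            if (indegree[i] == 0) queue.add(i);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int v : adj.get(u)) {
                dp[v] = Math.max(dp[v], dp[u] + 1);   // the DP transition
                if (--indegree[v] == 0) queue.add(v); // all ancestors of v done
            }
        }
        return dp;
    }

    public static void main(String[] args) {
        // DAG: 0->1, 0->2, 1->3, 2->3, 3->4
        int[] dp = maxDepths(5, new int[][]{{0,1},{0,2},{1,3},{2,3},{3,4}});
        System.out.println(java.util.Arrays.toString(dp)); // [0, 1, 1, 2, 3]
    }
}
```

Note that a node is dequeued only after all of its in-edges have been relaxed, which is why each dp[v] is final by the time v is processed.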

Here is a question from a later article for you to think about: how do you find all distinct cycles in a large directed graph (all cycles, not strongly connected components)? Hint: start from the strongly connected components.
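The hint can be made a little more concrete without spoiling the later article: every cycle lies entirely inside one strongly connected component, so computing SCCs first (Tarjan's algorithm, sketched below) lets cycle enumeration run on small subgraphs instead of the whole graph. The graph in main() is made-up illustration data.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Tarjan's strongly-connected-components algorithm: nodes sharing a
// component id can reach each other, so cycles only exist within one id.
public class SccSketch {
    private final List<List<Integer>> adj;
    private final int[] low, num, comp;
    private final boolean[] onStack;
    private final Deque<Integer> stack = new ArrayDeque<>();
    private int counter = 0, compCount = 0;

    public SccSketch(int n, int[][] edges) {
        adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) adj.get(e[0]).add(e[1]);
        low = new int[n]; num = new int[n]; comp = new int[n];
        onStack = new boolean[n];
        java.util.Arrays.fill(num, -1);
        java.util.Arrays.fill(comp, -1);
        for (int i = 0; i < n; i++) if (num[i] == -1) dfs(i);
    }

    private void dfs(int u) {
        low[u] = num[u] = counter++;   // discovery order of u
        stack.push(u); onStack[u] = true;
        for (int v : adj.get(u)) {
            if (num[v] == -1) { dfs(v); low[u] = Math.min(low[u], low[v]); }
            else if (onStack[v]) low[u] = Math.min(low[u], num[v]);
        }
        if (low[u] == num[u]) {        // u is the root of an SCC: pop it off
            int w;
            do { w = stack.pop(); onStack[w] = false; comp[w] = compCount; }
            while (w != u);
            compCount++;
        }
    }

    public int[] components() { return comp; }

    public static void main(String[] args) {
        // 0->1->2->0 forms a cycle; 3 hangs off it via 2->3
        SccSketch s = new SccSketch(4, new int[][]{{0,1},{1,2},{2,0},{2,3}});
        int[] c = s.components();
        System.out.println(c[0] == c[1] && c[1] == c[2]); // true: one SCC
        System.out.println(c[3] != c[0]);                 // true: 3 is alone
    }
}
```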

Now for the complete implementation of the problem introduced in this article.

First, the DFS approach for small datasets, which is very simple:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintStream;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;
import java.util.Set;

public class MaxDepthDFS {
    String path;
    BufferedReader br;
    PrintStream out;
    Map<String, LinkedList<String>> graph;
    int maxdepth;

    public MaxDepthDFS(String path) {
        this.path = path;
        maxdepth = 0;
        try {
            br = new BufferedReader(new FileReader(path));
            graph = new HashMap<String, LinkedList<String>>();
            String[] strs;
            String str;
            // each input line is whitespace-separated; fields 0 and 2 are the
            // endpoints of an edge, stored in the adjacency map as field2 -> field0
            while ((str = br.readLine()) != null) {
                strs = str.split("\\s");
                if (graph.containsKey(strs[2])) {
                    graph.get(strs[2]).add(strs[0]);
                } else {
                    LinkedList<String> linkedlist = new LinkedList<String>();
                    linkedlist.add(strs[0]);
                    graph.put(strs[2], linkedlist);
                }
            }
            br.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void dfs(String node, int depth) {
        if (graph.get(node) == null) { // no outgoing edges: a path ends here
            maxdepth = Math.max(maxdepth, depth);
            return;
        }
        Iterator<String> it = graph.get(node).iterator();
        while (it.hasNext()) {
            String nextnode = it.next();
            depth++;
            dfs(nextnode, depth);
            depth--;
        }
    }

    public void prints(String outpath) {
        int cnt = 0;
        Set<String> keys = graph.keySet();
        Iterator<String> it = keys.iterator();
        try {
            out = new PrintStream(outpath);
            while (it.hasNext()) {
                System.out.println(cnt++);
                String currentnode = it.next();
                maxdepth = 0;
                dfs(currentnode, 0);
                out.println(currentnode + " " + maxdepth);
            }
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Next, the topological-sort + DP implementation for large datasets:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintStream;
import java.util.LinkedList;
import java.util.Queue;

public class DepthICCalculator {
    int[] indegree;
    int[] outdegree;
    int[] head; // head[x]: index of the first edge leaving x, -1 if none
    int[] dp;   // dp[v]: maximum depth of node v
    Edge[] edge;
    int numnode;
    int numedge;

    public DepthICCalculator(String graphfile) {
        try {
            String line = null;
            BufferedReader in = new BufferedReader(new FileReader(graphfile));
            // first line: "<number of nodes> <number of edges>"
            line = in.readLine();
            String[] terms = line.split("\\s");
            numnode = Integer.parseInt(terms[0]);
            numedge = Integer.parseInt(terms[1]);
            indegree = new int[numnode];
            outdegree = new int[numnode];
            head = new int[numnode];
            dp = new int[numnode];
            for (int i = 0; i < numnode; i++) {
                indegree[i] = 0;
                outdegree[i] = 0;
                head[i] = -1;
                dp[i] = -0x7ffffff; // very small sentinel; overwritten before use
            }
            edge = new Edge[numedge];
            for (int i = 0; i < numedge; i++) {
                edge[i] = new Edge();
            }
            // remaining lines: one edge "y x" per line, meaning x -> y
            int e = 0;
            while ((line = in.readLine()) != null) {
                terms = line.split("\\s");
                int y = Integer.parseInt(terms[0]);
                int x = Integer.parseInt(terms[1]);
                edge[e].v = y;
                edge[e].next = head[x];
                head[x] = e;
                e++;
                indegree[y]++;
                outdegree[x]++;
            }
            for (int i = 0; i < numnode; i++) {
                if (indegree[i] == 0)
                    dp[i] = 0; // for root nodes, the depth is 0
            }
            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void computeMaxDepth() { // the key part: topological sort + DP
        Queue<Integer> queue = new LinkedList<Integer>();
        for (int i = 0; i < numnode; i++) {
            if (indegree[i] == 0)
                queue.add(i);
        }
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int i = head[u]; i != -1; i = edge[i].next) {
                int v = edge[i].v;
                dp[v] = Math.max(dp[v], dp[u] + 1); // the DP transition
                indegree[v]--;
                if (indegree[v] == 0)
                    queue.add(v);
            }
        }
    }

    public int[] getAllDepth() {
        return dp;
    }

    public class Edge {
        int v;    // target node of this edge
        int next; // index of the next edge sharing the same source, -1 if none
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        DepthICCalculator cal = new DepthICCalculator("C:/working/Debator/DBpediaData/DBpediaData3.9/skos/skos_categories_en_dag_id.txt");
        long e = System.currentTimeMillis();
        System.out.println(e - start);
        cal.computeMaxDepth();
        int[] depths = cal.getAllDepth();
        PrintStream out = new PrintStream("C:/working/Debator/DBpediaData/DBpediaData3.9/skos/DepthIC.txt");
        for (int i = 0; i < depths.length; i++) {
            out.println(i + " " + depths[i]);
        }
        out.close();
    }
}
