Apr 06, 2011 05:50 AM|minority2000uk|LINK
For the general processing I am using an XmlReader due to the size of the file, and technically I only need a one-time read per file.
But the external company who wrote this starts by counting the nodes, to report how many inserts/updates are going to be done. That count is done by looping through the whole XML file with the reader, and then they recreate the XmlReader again to do the actual processing.
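In rough outline it does something like this (a simplified sketch, not their actual code; ProcessNode is just a hypothetical stand-in for their insert/update logic, and filename/settings are the same as in my sample further down):

int count = 0;
// Pass 1: stream through the file once just to count the nodes.
using (XmlReader reader = XmlReader.Create(filename, settings))
{
    while (reader.ReadToFollowing("theNode"))
        count++;
}
// Pass 2: recreate the reader and stream through the file again to do the real work.
using (XmlReader reader = XmlReader.Create(filename, settings))
{
    while (reader.ReadToFollowing("theNode"))
        ProcessNode(reader, count);  // hypothetical insert/update call
}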
It just seems a bit clumsy to me. Understandably, the size of the file means we can't just load it all into memory, but is there another low-cost way to count the nodes without having to recreate the XmlReader each time?
Could we stream the file instead and use XmlDocument or XDocument (giving us LINQ capabilities) to improve the efficiency? I imagine there must be some sort of performance hit in re-creating an XmlReader two or three times.
For example, would the following use more memory than, say, reading with an XmlReader and looping through the whole thing to get a count?
using (XmlReader reader = XmlReader.Create(filename, settings))
{
    // Build an in-memory XPath view of the document, then count the matching nodes.
    XPathNavigator nav = new XPathDocument(reader).CreateNavigator();
    XPathNodeIterator xPathIt = nav.Select("//root/theNode");
    int c = xPathIt.Count;
}
Apr 06, 2011 08:05 AM|TP|LINK
From that code I would expect the xPathIt object (backed by the XPathDocument) to occupy a lot of memory on the server; it has to store the list of all the matching nodes somewhere in memory in order to get the count later on.
Whereas if you loop through the XML with the forward-only, read-only XmlReader to get the count, it should use less memory than the XPathNavigator approach, especially when you have multiple users accessing the same file.
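Something along these lines (a rough sketch, reusing filename, settings and the "theNode" element from your sample) keeps only the current node in memory while it counts:

int count = 0;
using (XmlReader reader = XmlReader.Create(filename, settings))
{
    while (reader.Read())
    {
        // Count only element start tags named "theNode"; nothing else is kept around.
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "theNode")
            count++;
    }
}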
Hope this helps.