regex - Using map() to get number of times list elements exist in a string in Python -


How can I get the number of times each item in a list occurs in a string in Python? This is what I have so far:

  import re

  def tester(x): return len(re.findall(x, paragraph))

  map(tester, ['banana', 'loganberry', 'passion fruit'])

returns [2, 0, 0]

What I'd like to do is extend this so I can feed the paragraph value into the map() function. Right now, the tester() function has the paragraph hard-coded. Is there a way to do this (maybe make an n-length list of paragraph values)? Any other ideas here?

Keep in mind that at some point in the future each of the array values will have a weight attached - so instead of keeping the values in a plain list, they will all need to be kept together.

Update: The paragraph will often be around 20K in size and the list will often have 200+ members. My thinking was that map() runs in parallel - so it would be more efficient than any serial method.

A response to the moving of the goalposts ("I probably need the regex because I'll need word delimiters in the near future"):

This method parses the text once to get a list of all the "words". Each word is looked up in a dictionary of the target words, and if it is a target word, it is counted. The time taken is O(P) + O(T), where P is the size of the paragraph and T is the number of targets. All other solutions I have seen (including the currently accepted solution) are O(P*T).

  def counts_all(targets, paragraph, word_regex=r"\w+"):
      tally = dict((target, 0) for target in targets)
      for word in re.findall(word_regex, paragraph):
          if word in tally:
              tally[word] += 1
      return [tally[target] for target in targets]

  def counts_iter(targets, paragraph, word_regex=r"\w+"):
      tally = dict((target, 0) for target in targets)
      for matchobj in re.finditer(word_regex, paragraph):
          word = matchobj.group()
          if word in tally:
              tally[word] += 1
      return [tally[target] for target in targets]

The finditer version is a strawman - it is much slower than the findall version.
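A quick runnable sketch of the single-pass tally approach described above (sample data mine):

```python
import re

def counts_all(targets, paragraph, word_regex=r"\w+"):
    # one pass over the paragraph: O(P) + O(T)
    tally = dict((target, 0) for target in targets)
    for word in re.findall(word_regex, paragraph):
        if word in tally:
            tally[word] += 1
    return [tally[target] for target in targets]

paragraph = "banana split, banana bread, no loganberry"
result = counts_all(['banana', 'loganberry', 'passion fruit'], paragraph)
print(result)  # [2, 1, 0]
```

Note one limitation: a multi-word target such as 'passion fruit' can never match, because \w+ tokenisation only yields single words.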

Here is the currently accepted solution expressed in a standardised form and augmented with word delimiters:

  def currently_accepted_solution_augmented(targets, paragraph):
      def tester(s):
          def f(x):
              return len(re.findall(r"\b" + x + r"\b", s))
          return f
      return map(tester(paragraph), targets)

which goes overboard on closures and can be reduced to:

  # Acknowledgement:
  # structurally the same as one of hughdbrown's benchmark functions
  def currently_accepted_solution_augmented_without_extra_closure(targets, paragraph):
      def tester(x):
          return len(re.findall(r"\b" + x + r"\b", paragraph))
      return map(tester, targets)

All variations on the currently accepted solution are O(P*T). Unlike the currently accepted solution, a regex search with word delimiters is not equivalent to a simple paragraph.find(target). Because the re engine does not use its "fast search" in that case, adding the word delimiters changes the solution from merely slow to very slow.
