I'm trying to extract the CPU usage and timestamp from this message:
2015-04-27t11:54:45.036z| vmx| hist ide1 irq 4414 42902 [ 250 - 375 ) count: 2 (0.00%) min/avg/max: 250/278.50/307
I'm using Logstash, and here is my logstash.conf file:
input {
  file {
    path => "/home/xyz/downloads/vmware.log"
    start_position => beginning
  }
}
filter {
  grok {
    match => ["message", "%{@timestamp}"]
  }
}
output {
  stdout { codec => rubydebug }
}
But it's giving me a grok parse error. Any help is appreciated. Thanks.
As per Magnus's comment, you're using the grok match function incorrectly. @timestamp is the name of a system field that Logstash uses for the timestamp the message was received at; it is not the name of a grok pattern.
First, I'd recommend having a look at the list of default grok patterns you can use, which can be found here. I'd also recommend using the grok debugger. Finally, if all else fails, drop into the #logstash IRC channel (on Freenode); we're pretty active in there, so I'm sure you'll get help.
Just to help you out a bit further, here's a quick grok pattern I've created that should match your example (I used the grok debugger to test this, so results in production might not be perfect - test it!):
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601}\|\ %{WORD}\|\ %{GREEDYDATA}\ min/avg/max:\ %{NUMBER:minimum}/%{NUMBER:average}/%{NUMBER:maximum}" ]
  }
}
To explain slightly, %{TIMESTAMP_ISO8601} is a default grok pattern that matches the timestamp in your example.
You'll notice I use \ quite a lot; the characters following it need to be escaped, because we're using a regex engine and characters such as spaces and pipes have special meaning. Escaping them disables that meaning so they are matched literally.
I have used the %{GREEDYDATA} pattern to capture anything. This can be useful when you want to capture the rest of the message; if you put it at the end of the grok pattern, it will capture all remaining text. However, I have taken a bit from your example (min/avg/max) to stop GREEDYDATA from capturing the rest of the message, since we want the data after that.
%{NUMBER} will capture numbers, obviously; the bit after the : inside the curly braces defines the name the field will be given by Logstash and subsequently saved in Elasticsearch.
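To illustrate how the pattern above decomposes your log line, here is a rough Python sketch using named regex groups in place of grok's %{NAME:field} captures. This is only an approximation for experimentation (the group regexes are my simplified stand-ins for the real TIMESTAMP_ISO8601, WORD, and NUMBER definitions), not what Logstash runs internally:

```python
import re

# Your sample log line from the question.
line = ("2015-04-27t11:54:45.036z| vmx| hist ide1 irq 4414 42902 "
        "[ 250 - 375 ) count: 2 (0.00%) min/avg/max: 250/278.50/307")

pattern = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}t[\d:.]+z)\| "  # roughly %{TIMESTAMP_ISO8601}
    r"(?P<source>\w+)\| "                            # roughly %{WORD}
    r".* min/avg/max: "                              # %{GREEDYDATA} plus the literal anchor
    r"(?P<minimum>[\d.]+)/(?P<average>[\d.]+)/(?P<maximum>[\d.]+)"  # %{NUMBER:...} captures
)

m = pattern.search(line)
print(m.groupdict())
```

The named groups end up playing the same role as the field names after the colon in grok: they label the captured text (minimum, average, maximum) so it can be stored per-field rather than as one blob.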
I hope that helps!