On 15/04/13 23:11, Eric Blake wrote:
On 04/15/2013 12:28 AM, Osier Yang wrote:
> ---
> cfg.mk | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
>
> diff --git a/cfg.mk b/cfg.mk
> index 394521e..9cf4cff 100644
> --- a/cfg.mk
> +++ b/cfg.mk
> @@ -722,6 +722,20 @@ sc_prohibit_exit_in_tests:
> halt='use return, not exit(), in tests' \
> $(_sc_search_regexp)
>
> +# Don't include duplidate header in the source (either *.c or *.h)
s/duplidate/duplicate/
> +sc_prohibit_duplicate_header:
> + @if $(VC_LIST_EXCEPT) | grep -l '\.[ch]$$' > /dev/null; then \
'grep -l' is wrong. You want:
grep '\.[ch]$$'
instead (see sc_preprocessor_indentation for an example).
For that matter, the '@if ...; then check; else :; fi' is overkill; we
KNOW we have .c files, so the grep will hit (that paradigm is used in
maint.mk because maint.mk is shared among multiple projects, some of
which really do ship without C files). I'd simplify this to just the
'check' portion; that is, all you need is from here...
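In other words, the rule would reduce to just the inner loops, something
like this (a sketch only, quoting the patch's body with the spelling
fixed; recipe lines need real tabs, and this still does the per-header
grep that the efficiency comment below addresses):

```make
sc_prohibit_duplicate_header:
	@for i in $$($(VC_LIST_EXCEPT) | grep '\.[ch]$$'); do \
	  for j in $$(grep '^# *include.*\.h' $$i \
	              | awk -F' ' '{print $$NF}'); do \
	    test $$(grep "^# *include $$j" $$i | wc -l) -gt 1 && \
	      { echo '$(ME): Duplicate header '$$j' in '$$i'' 1>&2; \
	        exit 1; } || :; \
	  done; \
	done
```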
To avoid duplicate work: I've finished this, but I need more work on
6/6 before sending out the set.
> + for i in $$($(VC_LIST_EXCEPT) | grep '\.[ch]$$'); do \
> + for j in $$(grep '^# *include.*\.h' $$i \
> + | awk -F' ' '{print $$NF}'); do \
> + test $$(grep "^# *include $$j" $$i | wc -l) -gt 1 && \
Not the most efficient way to write this. Your way does one grep/wc
pair per header line encountered per file. But since you are already
running each file through awk, why not have awk set up a hash of all
includes it sees, and then report an error if the hash hits more than
once, all on a single awk pass per file instead of 20-30 grep passes per
file. Would you like to take a shot at it, or shall I do it since I
mentioned it?
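A sketch of what that single awk pass might look like (the helper name
check_dup_headers, the file-list argument, and the message wording are
mine for illustration, not from the patch; the cfg.mk version would
inline this in the rule's recipe):

```shell
# One awk pass per file: count every '#include' target in an associative
# array, report each header the second time it is seen, and exit nonzero
# from END if any duplicate was found.
check_dup_headers() {
  for f in "$@"; do
    awk -v file="$f" '
      /^# *include/ {
        if (++seen[$NF] == 2) {     # report only on the second occurrence
          print "duplicate header " $NF " in " file
          status = 1
        }
      }
      END { exit status }' "$f" || return 1
  done
}
```

In the make rule, this replaces the inner grep/wc loop entirely: one awk
process per file instead of one grep+wc pair per include line.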
> + { echo '$(ME): Duplidate header '$$j' in '$$i'' 1>&2; \
s/Duplidate/Duplicate/
> + exit 1; } || :; \
> + done; \
> + done; \
...to here.
> + else :; \
> + fi
> +
> # We don't use this feature of maint.mk.
> prev_version_file = /dev/null
>
>